Jan 22 12:49:11 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 22 12:49:11 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 22 12:49:11 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 12:49:11 localhost kernel: BIOS-provided physical RAM map:
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 22 12:49:11 localhost kernel: NX (Execute Disable) protection: active
Jan 22 12:49:11 localhost kernel: APIC: Static calls initialized
Jan 22 12:49:11 localhost kernel: SMBIOS 2.8 present.
Jan 22 12:49:11 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 22 12:49:11 localhost kernel: Hypervisor detected: KVM
Jan 22 12:49:11 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 22 12:49:11 localhost kernel: kvm-clock: using sched offset of 5016221411 cycles
Jan 22 12:49:11 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 22 12:49:11 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 22 12:49:11 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 22 12:49:11 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 22 12:49:11 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 22 12:49:11 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 22 12:49:11 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 22 12:49:11 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 22 12:49:11 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 22 12:49:11 localhost kernel: Using GB pages for direct mapping
Jan 22 12:49:11 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 22 12:49:11 localhost kernel: ACPI: Early table checksum verification disabled
Jan 22 12:49:11 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 22 12:49:11 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 12:49:11 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 12:49:11 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 12:49:11 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 22 12:49:11 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 12:49:11 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 12:49:11 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 22 12:49:11 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 22 12:49:11 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 22 12:49:11 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 22 12:49:11 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 22 12:49:11 localhost kernel: No NUMA configuration found
Jan 22 12:49:11 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 22 12:49:11 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 22 12:49:11 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 22 12:49:11 localhost kernel: Zone ranges:
Jan 22 12:49:11 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 22 12:49:11 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 22 12:49:11 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 22 12:49:11 localhost kernel:   Device   empty
Jan 22 12:49:11 localhost kernel: Movable zone start for each node
Jan 22 12:49:11 localhost kernel: Early memory node ranges
Jan 22 12:49:11 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 22 12:49:11 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 22 12:49:11 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 22 12:49:11 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 22 12:49:11 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 22 12:49:11 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 22 12:49:11 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 22 12:49:11 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 22 12:49:11 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 22 12:49:11 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 22 12:49:11 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 22 12:49:11 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 22 12:49:11 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 22 12:49:11 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 22 12:49:11 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 22 12:49:11 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 22 12:49:11 localhost kernel: TSC deadline timer available
Jan 22 12:49:11 localhost kernel: CPU topo: Max. logical packages:   8
Jan 22 12:49:11 localhost kernel: CPU topo: Max. logical dies:       8
Jan 22 12:49:11 localhost kernel: CPU topo: Max. dies per package:   1
Jan 22 12:49:11 localhost kernel: CPU topo: Max. threads per core:   1
Jan 22 12:49:11 localhost kernel: CPU topo: Num. cores per package:     1
Jan 22 12:49:11 localhost kernel: CPU topo: Num. threads per package:   1
Jan 22 12:49:11 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 22 12:49:11 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 22 12:49:11 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 22 12:49:11 localhost kernel: Booting paravirtualized kernel on KVM
Jan 22 12:49:11 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 22 12:49:11 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 22 12:49:11 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 22 12:49:11 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 22 12:49:11 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 22 12:49:11 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 22 12:49:11 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 12:49:11 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 22 12:49:11 localhost kernel: random: crng init done
Jan 22 12:49:11 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 22 12:49:11 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 22 12:49:11 localhost kernel: Fallback order for Node 0: 0 
Jan 22 12:49:11 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 22 12:49:11 localhost kernel: Policy zone: Normal
Jan 22 12:49:11 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 22 12:49:11 localhost kernel: software IO TLB: area num 8.
Jan 22 12:49:11 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 22 12:49:11 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 22 12:49:11 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 22 12:49:11 localhost kernel: Dynamic Preempt: voluntary
Jan 22 12:49:11 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 22 12:49:11 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 22 12:49:11 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 22 12:49:11 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 22 12:49:11 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 22 12:49:11 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 22 12:49:11 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 22 12:49:11 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 22 12:49:11 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 22 12:49:11 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 22 12:49:11 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 22 12:49:11 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 22 12:49:11 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 22 12:49:11 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 22 12:49:11 localhost kernel: Console: colour VGA+ 80x25
Jan 22 12:49:11 localhost kernel: printk: console [ttyS0] enabled
Jan 22 12:49:11 localhost kernel: ACPI: Core revision 20230331
Jan 22 12:49:11 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 22 12:49:11 localhost kernel: x2apic enabled
Jan 22 12:49:11 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 22 12:49:11 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 22 12:49:11 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 22 12:49:11 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 22 12:49:11 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 22 12:49:11 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 22 12:49:11 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 22 12:49:11 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 22 12:49:11 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 22 12:49:11 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 22 12:49:11 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 22 12:49:11 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 22 12:49:11 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 22 12:49:11 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 22 12:49:11 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 22 12:49:11 localhost kernel: x86/bugs: return thunk changed
Jan 22 12:49:11 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 22 12:49:11 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 22 12:49:11 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 22 12:49:11 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 22 12:49:11 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 22 12:49:11 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 22 12:49:11 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 22 12:49:11 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 22 12:49:11 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 22 12:49:11 localhost kernel: landlock: Up and running.
Jan 22 12:49:11 localhost kernel: Yama: becoming mindful.
Jan 22 12:49:11 localhost kernel: SELinux:  Initializing.
Jan 22 12:49:11 localhost kernel: LSM support for eBPF active
Jan 22 12:49:11 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 22 12:49:11 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 22 12:49:11 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 22 12:49:11 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 22 12:49:11 localhost kernel: ... version:                0
Jan 22 12:49:11 localhost kernel: ... bit width:              48
Jan 22 12:49:11 localhost kernel: ... generic registers:      6
Jan 22 12:49:11 localhost kernel: ... value mask:             0000ffffffffffff
Jan 22 12:49:11 localhost kernel: ... max period:             00007fffffffffff
Jan 22 12:49:11 localhost kernel: ... fixed-purpose events:   0
Jan 22 12:49:11 localhost kernel: ... event mask:             000000000000003f
Jan 22 12:49:11 localhost kernel: signal: max sigframe size: 1776
Jan 22 12:49:11 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 22 12:49:11 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 22 12:49:11 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 22 12:49:11 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 22 12:49:11 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 22 12:49:11 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 22 12:49:11 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 22 12:49:11 localhost kernel: node 0 deferred pages initialised in 15ms
Jan 22 12:49:11 localhost kernel: Memory: 7763684K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618360K reserved, 0K cma-reserved)
Jan 22 12:49:11 localhost kernel: devtmpfs: initialized
Jan 22 12:49:11 localhost kernel: x86/mm: Memory block size: 128MB
Jan 22 12:49:11 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 22 12:49:11 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 22 12:49:11 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 22 12:49:11 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 22 12:49:11 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 22 12:49:11 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 22 12:49:11 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 22 12:49:11 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 22 12:49:11 localhost kernel: audit: type=2000 audit(1769086148.638:1): state=initialized audit_enabled=0 res=1
Jan 22 12:49:11 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 22 12:49:11 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 22 12:49:11 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 22 12:49:11 localhost kernel: cpuidle: using governor menu
Jan 22 12:49:11 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 22 12:49:11 localhost kernel: PCI: Using configuration type 1 for base access
Jan 22 12:49:11 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 22 12:49:11 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 22 12:49:11 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 22 12:49:11 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 22 12:49:11 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 22 12:49:11 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 22 12:49:11 localhost kernel: Demotion targets for Node 0: null
Jan 22 12:49:11 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 22 12:49:11 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 22 12:49:11 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 22 12:49:11 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 22 12:49:11 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 22 12:49:11 localhost kernel: ACPI: Interpreter enabled
Jan 22 12:49:11 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 22 12:49:11 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 22 12:49:11 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 22 12:49:11 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 22 12:49:11 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 22 12:49:11 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 22 12:49:11 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [3] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [4] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [5] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [6] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [7] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [8] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [9] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [10] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [11] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [12] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [13] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [14] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [15] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [16] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [17] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [18] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [19] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [20] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [21] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [22] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [23] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [24] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [25] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [26] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [27] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [28] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [29] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [30] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [31] registered
Jan 22 12:49:11 localhost kernel: PCI host bridge to bus 0000:00
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 22 12:49:11 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 22 12:49:11 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 22 12:49:11 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 22 12:49:11 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 22 12:49:11 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 22 12:49:11 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 22 12:49:11 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 22 12:49:11 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 22 12:49:11 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 22 12:49:11 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 22 12:49:11 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 22 12:49:11 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 22 12:49:11 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 22 12:49:11 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 22 12:49:11 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 22 12:49:11 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 22 12:49:11 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 22 12:49:11 localhost kernel: iommu: Default domain type: Translated
Jan 22 12:49:11 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 22 12:49:11 localhost kernel: SCSI subsystem initialized
Jan 22 12:49:11 localhost kernel: ACPI: bus type USB registered
Jan 22 12:49:11 localhost kernel: usbcore: registered new interface driver usbfs
Jan 22 12:49:11 localhost kernel: usbcore: registered new interface driver hub
Jan 22 12:49:11 localhost kernel: usbcore: registered new device driver usb
Jan 22 12:49:11 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 22 12:49:11 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 22 12:49:11 localhost kernel: PTP clock support registered
Jan 22 12:49:11 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 22 12:49:11 localhost kernel: NetLabel: Initializing
Jan 22 12:49:11 localhost kernel: NetLabel:  domain hash size = 128
Jan 22 12:49:11 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 22 12:49:11 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 22 12:49:11 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 22 12:49:11 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 22 12:49:11 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 22 12:49:11 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 22 12:49:11 localhost kernel: vgaarb: loaded
Jan 22 12:49:11 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 22 12:49:11 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 22 12:49:11 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 22 12:49:11 localhost kernel: pnp: PnP ACPI init
Jan 22 12:49:11 localhost kernel: pnp 00:03: [dma 2]
Jan 22 12:49:11 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 22 12:49:11 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 22 12:49:11 localhost kernel: NET: Registered PF_INET protocol family
Jan 22 12:49:11 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 22 12:49:11 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 22 12:49:11 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 22 12:49:11 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 22 12:49:11 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 22 12:49:11 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 22 12:49:11 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 22 12:49:11 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 22 12:49:11 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 22 12:49:11 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 22 12:49:11 localhost kernel: NET: Registered PF_XDP protocol family
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 22 12:49:11 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 22 12:49:11 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 73738 usecs
Jan 22 12:49:11 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 22 12:49:11 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 22 12:49:11 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 22 12:49:11 localhost kernel: ACPI: bus type thunderbolt registered
Jan 22 12:49:11 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 22 12:49:11 localhost kernel: Initialise system trusted keyrings
Jan 22 12:49:11 localhost kernel: Key type blacklist registered
Jan 22 12:49:11 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 22 12:49:11 localhost kernel: zbud: loaded
Jan 22 12:49:11 localhost kernel: integrity: Platform Keyring initialized
Jan 22 12:49:11 localhost kernel: integrity: Machine keyring initialized
Jan 22 12:49:11 localhost kernel: Freeing initrd memory: 87956K
Jan 22 12:49:11 localhost kernel: NET: Registered PF_ALG protocol family
Jan 22 12:49:11 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 22 12:49:11 localhost kernel: Key type asymmetric registered
Jan 22 12:49:11 localhost kernel: Asymmetric key parser 'x509' registered
Jan 22 12:49:11 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 22 12:49:11 localhost kernel: io scheduler mq-deadline registered
Jan 22 12:49:11 localhost kernel: io scheduler kyber registered
Jan 22 12:49:11 localhost kernel: io scheduler bfq registered
Jan 22 12:49:11 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 22 12:49:11 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 22 12:49:11 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 22 12:49:11 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 22 12:49:11 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 22 12:49:11 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 22 12:49:11 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 22 12:49:11 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 22 12:49:11 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 22 12:49:11 localhost kernel: Non-volatile memory driver v1.3
Jan 22 12:49:11 localhost kernel: rdac: device handler registered
Jan 22 12:49:11 localhost kernel: hp_sw: device handler registered
Jan 22 12:49:11 localhost kernel: emc: device handler registered
Jan 22 12:49:11 localhost kernel: alua: device handler registered
Jan 22 12:49:11 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 22 12:49:11 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 22 12:49:11 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 22 12:49:11 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 22 12:49:11 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 22 12:49:11 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 12:49:11 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 22 12:49:11 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 22 12:49:11 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 22 12:49:11 localhost kernel: hub 1-0:1.0: USB hub found
Jan 22 12:49:11 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 22 12:49:11 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 22 12:49:11 localhost kernel: usbserial: USB Serial support registered for generic
Jan 22 12:49:11 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 22 12:49:11 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 22 12:49:11 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 22 12:49:11 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 22 12:49:11 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 22 12:49:11 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 22 12:49:11 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 22 12:49:11 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-22T12:49:10 UTC (1769086150)
Jan 22 12:49:11 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 22 12:49:11 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 22 12:49:11 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 22 12:49:11 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 22 12:49:11 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 22 12:49:11 localhost kernel: usbcore: registered new interface driver usbhid
Jan 22 12:49:11 localhost kernel: usbhid: USB HID core driver
Jan 22 12:49:11 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 22 12:49:11 localhost kernel: Initializing XFRM netlink socket
Jan 22 12:49:11 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 22 12:49:11 localhost kernel: Segment Routing with IPv6
Jan 22 12:49:11 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 22 12:49:11 localhost kernel: mpls_gso: MPLS GSO support
Jan 22 12:49:11 localhost kernel: IPI shorthand broadcast: enabled
Jan 22 12:49:11 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 22 12:49:11 localhost kernel: AES CTR mode by8 optimization enabled
Jan 22 12:49:11 localhost kernel: sched_clock: Marking stable (2892043577, 148805853)->(3262543804, -221694374)
Jan 22 12:49:11 localhost kernel: registered taskstats version 1
Jan 22 12:49:11 localhost kernel: Loading compiled-in X.509 certificates
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 22 12:49:11 localhost kernel: Demotion targets for Node 0: null
Jan 22 12:49:11 localhost kernel: page_owner is disabled
Jan 22 12:49:11 localhost kernel: Key type .fscrypt registered
Jan 22 12:49:11 localhost kernel: Key type fscrypt-provisioning registered
Jan 22 12:49:11 localhost kernel: Key type big_key registered
Jan 22 12:49:11 localhost kernel: Key type encrypted registered
Jan 22 12:49:11 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 22 12:49:11 localhost kernel: Loading compiled-in module X.509 certificates
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 22 12:49:11 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 22 12:49:11 localhost kernel: ima: No architecture policies found
Jan 22 12:49:11 localhost kernel: evm: Initialising EVM extended attributes:
Jan 22 12:49:11 localhost kernel: evm: security.selinux
Jan 22 12:49:11 localhost kernel: evm: security.SMACK64 (disabled)
Jan 22 12:49:11 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 22 12:49:11 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 22 12:49:11 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 22 12:49:11 localhost kernel: evm: security.apparmor (disabled)
Jan 22 12:49:11 localhost kernel: evm: security.ima
Jan 22 12:49:11 localhost kernel: evm: security.capability
Jan 22 12:49:11 localhost kernel: evm: HMAC attrs: 0x1
Jan 22 12:49:11 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 22 12:49:11 localhost kernel: Running certificate verification RSA selftest
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 22 12:49:11 localhost kernel: Running certificate verification ECDSA selftest
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 22 12:49:11 localhost kernel: clk: Disabling unused clocks
Jan 22 12:49:11 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 22 12:49:11 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 22 12:49:11 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 22 12:49:11 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 22 12:49:11 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 22 12:49:11 localhost kernel: Run /init as init process
Jan 22 12:49:11 localhost kernel:   with arguments:
Jan 22 12:49:11 localhost kernel:     /init
Jan 22 12:49:11 localhost kernel:   with environment:
Jan 22 12:49:11 localhost kernel:     HOME=/
Jan 22 12:49:11 localhost kernel:     TERM=linux
Jan 22 12:49:11 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 22 12:49:11 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 22 12:49:11 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 22 12:49:11 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 22 12:49:11 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 22 12:49:11 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 22 12:49:11 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 22 12:49:11 localhost systemd[1]: Detected virtualization kvm.
Jan 22 12:49:11 localhost systemd[1]: Detected architecture x86-64.
Jan 22 12:49:11 localhost systemd[1]: Running in initrd.
Jan 22 12:49:11 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 22 12:49:11 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 22 12:49:11 localhost systemd[1]: No hostname configured, using default hostname.
Jan 22 12:49:11 localhost systemd[1]: Hostname set to <localhost>.
Jan 22 12:49:11 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 22 12:49:11 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 22 12:49:11 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 22 12:49:11 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 22 12:49:11 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 22 12:49:11 localhost systemd[1]: Reached target Local File Systems.
Jan 22 12:49:11 localhost systemd[1]: Reached target Path Units.
Jan 22 12:49:11 localhost systemd[1]: Reached target Slice Units.
Jan 22 12:49:11 localhost systemd[1]: Reached target Swaps.
Jan 22 12:49:11 localhost systemd[1]: Reached target Timer Units.
Jan 22 12:49:11 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 22 12:49:11 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 22 12:49:11 localhost systemd[1]: Listening on Journal Socket.
Jan 22 12:49:11 localhost systemd[1]: Listening on udev Control Socket.
Jan 22 12:49:11 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 22 12:49:11 localhost systemd[1]: Reached target Socket Units.
Jan 22 12:49:11 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 22 12:49:11 localhost systemd[1]: Starting Journal Service...
Jan 22 12:49:11 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 22 12:49:11 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 22 12:49:11 localhost systemd[1]: Starting Create System Users...
Jan 22 12:49:11 localhost systemd[1]: Starting Setup Virtual Console...
Jan 22 12:49:11 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 22 12:49:11 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 22 12:49:11 localhost systemd[1]: Finished Create System Users.
Jan 22 12:49:11 localhost systemd-journald[307]: Journal started
Jan 22 12:49:11 localhost systemd-journald[307]: Runtime Journal (/run/log/journal/2198fae51aa3494083f6677ed40734bb) is 8.0M, max 153.6M, 145.6M free.
Jan 22 12:49:11 localhost systemd-sysusers[312]: Creating group 'users' with GID 100.
Jan 22 12:49:11 localhost systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Jan 22 12:49:11 localhost systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 22 12:49:11 localhost systemd[1]: Started Journal Service.
Jan 22 12:49:11 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 22 12:49:11 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 22 12:49:11 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 22 12:49:11 localhost systemd[1]: Finished Setup Virtual Console.
Jan 22 12:49:11 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 22 12:49:11 localhost systemd[1]: Starting dracut cmdline hook...
Jan 22 12:49:11 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 22 12:49:11 localhost dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Jan 22 12:49:11 localhost dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 12:49:11 localhost systemd[1]: Finished dracut cmdline hook.
Jan 22 12:49:11 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 22 12:49:11 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 22 12:49:11 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 22 12:49:11 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 22 12:49:11 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 22 12:49:11 localhost kernel: RPC: Registered udp transport module.
Jan 22 12:49:11 localhost kernel: RPC: Registered tcp transport module.
Jan 22 12:49:11 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 22 12:49:11 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 22 12:49:11 localhost rpc.statd[443]: Version 2.5.4 starting
Jan 22 12:49:11 localhost rpc.statd[443]: Initializing NSM state
Jan 22 12:49:11 localhost rpc.idmapd[448]: Setting log level to 0
Jan 22 12:49:11 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 22 12:49:11 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 22 12:49:11 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Jan 22 12:49:11 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 22 12:49:11 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 22 12:49:11 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 22 12:49:11 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 22 12:49:11 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 22 12:49:12 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 22 12:49:12 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 12:49:12 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 22 12:49:12 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 22 12:49:12 localhost kernel: libata version 3.00 loaded.
Jan 22 12:49:12 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 22 12:49:12 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 22 12:49:12 localhost systemd-udevd[487]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 12:49:12 localhost kernel: scsi host0: ata_piix
Jan 22 12:49:12 localhost kernel: scsi host1: ata_piix
Jan 22 12:49:12 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 22 12:49:12 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 22 12:49:12 localhost kernel:  vda: vda1
Jan 22 12:49:12 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 22 12:49:12 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 22 12:49:12 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 22 12:49:12 localhost systemd[1]: Reached target System Initialization.
Jan 22 12:49:12 localhost systemd[1]: Reached target Basic System.
Jan 22 12:49:12 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 22 12:49:12 localhost systemd[1]: Reached target Network.
Jan 22 12:49:12 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 22 12:49:12 localhost systemd[1]: Starting dracut initqueue hook...
Jan 22 12:49:12 localhost kernel: ata1: found unknown device (class 0)
Jan 22 12:49:12 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 22 12:49:12 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 22 12:49:12 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 22 12:49:12 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 22 12:49:12 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 22 12:49:12 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 22 12:49:12 localhost systemd[1]: Reached target Initrd Root Device.
Jan 22 12:49:12 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 22 12:49:12 localhost systemd[1]: Finished dracut initqueue hook.
Jan 22 12:49:12 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 22 12:49:12 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 22 12:49:12 localhost systemd[1]: Reached target Remote File Systems.
Jan 22 12:49:12 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 22 12:49:12 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 22 12:49:12 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 22 12:49:12 localhost systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Jan 22 12:49:12 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 22 12:49:12 localhost systemd[1]: Mounting /sysroot...
Jan 22 12:49:13 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 22 12:49:13 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 22 12:49:13 localhost kernel: XFS (vda1): Ending clean mount
Jan 22 12:49:13 localhost systemd[1]: Mounted /sysroot.
Jan 22 12:49:13 localhost systemd[1]: Reached target Initrd Root File System.
Jan 22 12:49:13 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 22 12:49:13 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 22 12:49:13 localhost systemd[1]: Reached target Initrd File Systems.
Jan 22 12:49:13 localhost systemd[1]: Reached target Initrd Default Target.
Jan 22 12:49:13 localhost systemd[1]: Starting dracut mount hook...
Jan 22 12:49:13 localhost systemd[1]: Finished dracut mount hook.
Jan 22 12:49:13 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 22 12:49:13 localhost rpc.idmapd[448]: exiting on signal 15
Jan 22 12:49:13 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 22 12:49:13 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 22 12:49:13 localhost systemd[1]: Stopped target Network.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Timer Units.
Jan 22 12:49:13 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 22 12:49:13 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Basic System.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Path Units.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Remote File Systems.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Slice Units.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Socket Units.
Jan 22 12:49:13 localhost systemd[1]: Stopped target System Initialization.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Local File Systems.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Swaps.
Jan 22 12:49:13 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped dracut mount hook.
Jan 22 12:49:13 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 22 12:49:13 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 22 12:49:13 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 22 12:49:13 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 22 12:49:13 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 22 12:49:13 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 22 12:49:13 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 22 12:49:13 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 22 12:49:13 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 22 12:49:13 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 22 12:49:13 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 22 12:49:13 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Closed udev Control Socket.
Jan 22 12:49:13 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Closed udev Kernel Socket.
Jan 22 12:49:13 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 22 12:49:13 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 22 12:49:13 localhost systemd[1]: Starting Cleanup udev Database...
Jan 22 12:49:13 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 22 12:49:13 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 22 12:49:13 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Create System Users.
Jan 22 12:49:13 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Finished Cleanup udev Database.
Jan 22 12:49:13 localhost systemd[1]: Reached target Switch Root.
Jan 22 12:49:13 localhost systemd[1]: Starting Switch Root...
Jan 22 12:49:13 localhost systemd[1]: Switching root.
Jan 22 12:49:13 localhost systemd-journald[307]: Journal stopped
Jan 22 12:49:14 localhost systemd-journald[307]: Received SIGTERM from PID 1 (systemd).
Jan 22 12:49:14 localhost kernel: audit: type=1404 audit(1769086153.477:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 22 12:49:14 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 12:49:14 localhost kernel: SELinux:  policy capability open_perms=1
Jan 22 12:49:14 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 12:49:14 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 22 12:49:14 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 12:49:14 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 12:49:14 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 12:49:14 localhost kernel: audit: type=1403 audit(1769086153.635:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 22 12:49:14 localhost systemd[1]: Successfully loaded SELinux policy in 161.110ms.
Jan 22 12:49:14 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.027ms.
Jan 22 12:49:14 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 22 12:49:14 localhost systemd[1]: Detected virtualization kvm.
Jan 22 12:49:14 localhost systemd[1]: Detected architecture x86-64.
Jan 22 12:49:14 localhost systemd-rc-local-generator[641]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 12:49:14 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 22 12:49:14 localhost systemd[1]: Stopped Switch Root.
Jan 22 12:49:14 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 22 12:49:14 localhost systemd[1]: Created slice Slice /system/getty.
Jan 22 12:49:14 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 22 12:49:14 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 22 12:49:14 localhost systemd[1]: Created slice User and Session Slice.
Jan 22 12:49:14 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 22 12:49:14 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 22 12:49:14 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 22 12:49:14 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 22 12:49:14 localhost systemd[1]: Stopped target Switch Root.
Jan 22 12:49:14 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 22 12:49:14 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 22 12:49:14 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 22 12:49:14 localhost systemd[1]: Reached target Path Units.
Jan 22 12:49:14 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 22 12:49:14 localhost systemd[1]: Reached target Slice Units.
Jan 22 12:49:14 localhost systemd[1]: Reached target Swaps.
Jan 22 12:49:14 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 22 12:49:14 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 22 12:49:14 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 22 12:49:14 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 22 12:49:14 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 22 12:49:14 localhost systemd[1]: Listening on udev Control Socket.
Jan 22 12:49:14 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 22 12:49:14 localhost systemd[1]: Mounting Huge Pages File System...
Jan 22 12:49:14 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 22 12:49:14 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 22 12:49:14 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 22 12:49:14 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 22 12:49:14 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 22 12:49:14 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 22 12:49:14 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 22 12:49:14 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 22 12:49:14 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 22 12:49:14 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 22 12:49:14 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 22 12:49:14 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 22 12:49:14 localhost systemd[1]: Stopped Journal Service.
Jan 22 12:49:14 localhost systemd[1]: Starting Journal Service...
Jan 22 12:49:14 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 22 12:49:14 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 22 12:49:14 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 12:49:14 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 22 12:49:14 localhost kernel: fuse: init (API version 7.37)
Jan 22 12:49:14 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 22 12:49:14 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 22 12:49:14 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 22 12:49:14 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 22 12:49:14 localhost systemd[1]: Mounted Huge Pages File System.
Jan 22 12:49:14 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 22 12:49:14 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 22 12:49:14 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 22 12:49:14 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 22 12:49:14 localhost systemd-journald[682]: Journal started
Jan 22 12:49:14 localhost systemd-journald[682]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 22 12:49:13 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 22 12:49:13 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 22 12:49:14 localhost systemd[1]: Started Journal Service.
Jan 22 12:49:14 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 12:49:14 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 22 12:49:14 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 22 12:49:14 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 22 12:49:14 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 22 12:49:14 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 22 12:49:14 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 22 12:49:14 localhost kernel: ACPI: bus type drm_connector registered
Jan 22 12:49:14 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 22 12:49:14 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 22 12:49:14 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 22 12:49:14 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 22 12:49:14 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 22 12:49:14 localhost systemd[1]: Mounting FUSE Control File System...
Jan 22 12:49:14 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 22 12:49:14 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 22 12:49:14 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 22 12:49:14 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 22 12:49:14 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 22 12:49:14 localhost systemd[1]: Starting Create System Users...
Jan 22 12:49:14 localhost systemd-journald[682]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 22 12:49:14 localhost systemd-journald[682]: Received client request to flush runtime journal.
Jan 22 12:49:14 localhost systemd[1]: Mounted FUSE Control File System.
Jan 22 12:49:14 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 22 12:49:14 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 22 12:49:14 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 22 12:49:14 localhost systemd[1]: Finished Create System Users.
Jan 22 12:49:14 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 22 12:49:14 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 22 12:49:14 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 22 12:49:14 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 22 12:49:14 localhost systemd[1]: Reached target Local File Systems.
Jan 22 12:49:14 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 22 12:49:14 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 22 12:49:14 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 22 12:49:14 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 22 12:49:14 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 22 12:49:14 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 22 12:49:14 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 22 12:49:14 localhost bootctl[699]: Couldn't find EFI system partition, skipping.
Jan 22 12:49:14 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 22 12:49:14 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 22 12:49:14 localhost systemd[1]: Starting Security Auditing Service...
Jan 22 12:49:14 localhost auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 22 12:49:14 localhost systemd[1]: Starting RPC Bind...
Jan 22 12:49:14 localhost auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 22 12:49:14 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 22 12:49:14 localhost systemd[1]: Started RPC Bind.
Jan 22 12:49:14 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 22 12:49:14 localhost augenrules[710]: /sbin/augenrules: No change
Jan 22 12:49:14 localhost augenrules[725]: No rules
Jan 22 12:49:14 localhost augenrules[725]: enabled 1
Jan 22 12:49:14 localhost augenrules[725]: failure 1
Jan 22 12:49:14 localhost augenrules[725]: pid 704
Jan 22 12:49:14 localhost augenrules[725]: rate_limit 0
Jan 22 12:49:14 localhost augenrules[725]: backlog_limit 8192
Jan 22 12:49:14 localhost augenrules[725]: lost 0
Jan 22 12:49:14 localhost augenrules[725]: backlog 3
Jan 22 12:49:14 localhost augenrules[725]: backlog_wait_time 60000
Jan 22 12:49:14 localhost augenrules[725]: backlog_wait_time_actual 0
Jan 22 12:49:14 localhost augenrules[725]: enabled 1
Jan 22 12:49:14 localhost augenrules[725]: failure 1
Jan 22 12:49:14 localhost augenrules[725]: pid 704
Jan 22 12:49:14 localhost augenrules[725]: rate_limit 0
Jan 22 12:49:14 localhost augenrules[725]: backlog_limit 8192
Jan 22 12:49:14 localhost augenrules[725]: lost 0
Jan 22 12:49:14 localhost augenrules[725]: backlog 2
Jan 22 12:49:14 localhost augenrules[725]: backlog_wait_time 60000
Jan 22 12:49:14 localhost augenrules[725]: backlog_wait_time_actual 0
Jan 22 12:49:14 localhost augenrules[725]: enabled 1
Jan 22 12:49:14 localhost augenrules[725]: failure 1
Jan 22 12:49:14 localhost augenrules[725]: pid 704
Jan 22 12:49:14 localhost augenrules[725]: rate_limit 0
Jan 22 12:49:14 localhost augenrules[725]: backlog_limit 8192
Jan 22 12:49:14 localhost augenrules[725]: lost 0
Jan 22 12:49:14 localhost augenrules[725]: backlog 2
Jan 22 12:49:14 localhost augenrules[725]: backlog_wait_time 60000
Jan 22 12:49:14 localhost augenrules[725]: backlog_wait_time_actual 0
Jan 22 12:49:14 localhost systemd[1]: Started Security Auditing Service.
Jan 22 12:49:14 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 22 12:49:14 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 22 12:49:14 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 22 12:49:15 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 22 12:49:15 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 22 12:49:15 localhost systemd[1]: Starting Update is Completed...
Jan 22 12:49:15 localhost systemd[1]: Finished Update is Completed.
Jan 22 12:49:15 localhost systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Jan 22 12:49:15 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 22 12:49:15 localhost systemd[1]: Reached target System Initialization.
Jan 22 12:49:15 localhost systemd[1]: Started dnf makecache --timer.
Jan 22 12:49:15 localhost systemd[1]: Started Daily rotation of log files.
Jan 22 12:49:15 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 22 12:49:15 localhost systemd[1]: Reached target Timer Units.
Jan 22 12:49:15 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 22 12:49:15 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 22 12:49:15 localhost systemd[1]: Reached target Socket Units.
Jan 22 12:49:15 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 22 12:49:15 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 12:49:15 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 22 12:49:15 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 22 12:49:15 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 12:49:15 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 22 12:49:15 localhost systemd-udevd[736]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 12:49:15 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 22 12:49:15 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 22 12:49:15 localhost systemd[1]: Reached target Basic System.
Jan 22 12:49:15 localhost dbus-broker-lau[758]: Ready
Jan 22 12:49:15 localhost systemd[1]: Starting NTP client/server...
Jan 22 12:49:15 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 22 12:49:15 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 22 12:49:15 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 22 12:49:15 localhost systemd[1]: Started irqbalance daemon.
Jan 22 12:49:15 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 22 12:49:15 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 12:49:15 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 12:49:15 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 12:49:15 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 22 12:49:15 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 22 12:49:15 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 22 12:49:15 localhost systemd[1]: Starting User Login Management...
Jan 22 12:49:15 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 22 12:49:15 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 22 12:49:15 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 22 12:49:15 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 22 12:49:15 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 22 12:49:15 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 22 12:49:15 localhost systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 22 12:49:15 localhost systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 22 12:49:15 localhost systemd-logind[787]: New seat seat0.
Jan 22 12:49:15 localhost systemd[1]: Started User Login Management.
Jan 22 12:49:15 localhost chronyd[807]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 22 12:49:15 localhost chronyd[807]: Loaded 0 symmetric keys
Jan 22 12:49:15 localhost chronyd[807]: Using right/UTC timezone to obtain leap second data
Jan 22 12:49:15 localhost chronyd[807]: Loaded seccomp filter (level 2)
Jan 22 12:49:15 localhost systemd[1]: Started NTP client/server.
Jan 22 12:49:15 localhost kernel: kvm_amd: TSC scaling supported
Jan 22 12:49:15 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 22 12:49:15 localhost kernel: kvm_amd: Nested Paging enabled
Jan 22 12:49:15 localhost kernel: kvm_amd: LBR virtualization supported
Jan 22 12:49:15 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 22 12:49:15 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 22 12:49:15 localhost kernel: Console: switching to colour dummy device 80x25
Jan 22 12:49:15 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 22 12:49:15 localhost kernel: [drm] features: -context_init
Jan 22 12:49:15 localhost kernel: [drm] number of scanouts: 1
Jan 22 12:49:15 localhost kernel: [drm] number of cap sets: 0
Jan 22 12:49:15 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 22 12:49:15 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 22 12:49:15 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 22 12:49:15 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 22 12:49:15 localhost iptables.init[784]: iptables: Applying firewall rules: [  OK  ]
Jan 22 12:49:15 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 22 12:49:16 localhost cloud-init[842]: Cloud-init v. 24.4-8.el9 running 'init-local' at Thu, 22 Jan 2026 12:49:16 +0000. Up 8.96 seconds.
Jan 22 12:49:16 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 22 12:49:16 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 22 12:49:16 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp3xfq58m1.mount: Deactivated successfully.
Jan 22 12:49:16 localhost systemd[1]: Starting Hostname Service...
Jan 22 12:49:16 localhost systemd[1]: Started Hostname Service.
Jan 22 12:49:16 np0005592158.novalocal systemd-hostnamed[856]: Hostname set to <np0005592158.novalocal> (static)
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Reached target Preparation for Network.
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Starting Network Manager...
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.1710] NetworkManager (version 1.54.3-2.el9) is starting... (boot:d923d6f4-79ae-48f6-b1f3-cf5ec2bceff3)
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.1715] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.1789] manager[0x558df8b9f000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.1825] hostname: hostname: using hostnamed
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.1826] hostname: static hostname changed from (none) to "np0005592158.novalocal"
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.1830] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.1959] manager[0x558df8b9f000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.1960] manager[0x558df8b9f000]: rfkill: WWAN hardware radio set enabled
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2005] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2006] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2007] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2008] manager: Networking is enabled by state file
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2010] settings: Loaded settings plugin: keyfile (internal)
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2020] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2041] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2054] dhcp: init: Using DHCP client 'internal'
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2057] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2072] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2080] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2090] device (lo): Activation: starting connection 'lo' (85925d65-d6c4-4300-b142-abef792fcfc1)
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2101] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2105] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2134] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2139] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2142] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2144] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2146] device (eth0): carrier: link connected
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2150] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2158] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2164] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2168] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2169] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2172] manager: NetworkManager state is now CONNECTING
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2174] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2184] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2188] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2219] dhcp4 (eth0): state changed new lease, address=38.102.83.119
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2227] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2248] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Started Network Manager.
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Reached target Network.
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2484] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2487] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2488] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2491] device (lo): Activation: successful, device activated.
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2495] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2498] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2500] device (eth0): Activation: successful, device activated.
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2507] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 12:49:17 np0005592158.novalocal NetworkManager[860]: <info>  [1769086157.2510] manager: startup complete
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Reached target NFS client services.
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Reached target Remote File Systems.
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 22 12:49:17 np0005592158.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: Cloud-init v. 24.4-8.el9 running 'init' at Thu, 22 Jan 2026 12:49:17 +0000. Up 9.93 seconds.
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: |  eth0  | True |        38.102.83.119         | 255.255.255.0 | global | fa:16:3e:78:47:38 |
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:fe78:4738/64 |       .       |  link  | fa:16:3e:78:47:38 |
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 22 12:49:17 np0005592158.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 22 12:49:18 np0005592158.novalocal useradd[990]: new group: name=cloud-user, GID=1001
Jan 22 12:49:18 np0005592158.novalocal useradd[990]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 22 12:49:18 np0005592158.novalocal useradd[990]: add 'cloud-user' to group 'adm'
Jan 22 12:49:18 np0005592158.novalocal useradd[990]: add 'cloud-user' to group 'systemd-journal'
Jan 22 12:49:18 np0005592158.novalocal useradd[990]: add 'cloud-user' to shadow group 'adm'
Jan 22 12:49:18 np0005592158.novalocal useradd[990]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: Generating public/private rsa key pair.
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: The key fingerprint is:
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: SHA256:tLLf3sAR7bmWyT372+gP1nZboxLQL9EoKjKfZihVlQc root@np0005592158.novalocal
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: The key's randomart image is:
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: +---[RSA 3072]----+
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |       Eo        |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |       o . .     |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |      . o o +    |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |     . . + * o   |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |    . . S + =    |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |   + . + . = * . |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |  . = +   o O =.=|
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: | . . = . . = ..**|
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |  . o   ..o oo+==|
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: +----[SHA256]-----+
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: Generating public/private ecdsa key pair.
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: The key fingerprint is:
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: SHA256:bpFA6WEyFisKNlKIhWEOIxmhWQS1WLWXIaTPX+Qn+zM root@np0005592158.novalocal
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: The key's randomart image is:
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: +---[ECDSA 256]---+
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |X&Bo=.o.         |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |@B o+=+o         |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |*++.o=+..        |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |+..+ ..+ .       |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |.   o   S .      |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |     . o =       |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |      . +        |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |       . .E      |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |          .o     |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: +----[SHA256]-----+
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: Generating public/private ed25519 key pair.
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: The key fingerprint is:
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: SHA256:kIVDHGW7cjeKUDtMzNO7kqs8AhPnLEgyrNcxjPv380o root@np0005592158.novalocal
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: The key's randomart image is:
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: +--[ED25519 256]--+
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |     oo++        |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |     o+= .       |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |.  o  O.o        |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |+oo ++ + o       |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |=* o.o= S o      |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |* = .. * + .     |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: | = .  + E        |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |  ..o .+.        |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: |   .o+..o+.      |
Jan 22 12:49:19 np0005592158.novalocal cloud-init[923]: +----[SHA256]-----+
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Reached target Network is Online.
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Starting System Logging Service...
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 22 12:49:19 np0005592158.novalocal sm-notify[1006]: Version 2.5.4 starting
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Starting Permit User Sessions...
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 22 12:49:19 np0005592158.novalocal sshd[1008]: Server listening on 0.0.0.0 port 22.
Jan 22 12:49:19 np0005592158.novalocal sshd[1008]: Server listening on :: port 22.
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Finished Permit User Sessions.
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Started Command Scheduler.
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Started Getty on tty1.
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 22 12:49:19 np0005592158.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Jan 22 12:49:19 np0005592158.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Reached target Login Prompts.
Jan 22 12:49:19 np0005592158.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 79% if used.)
Jan 22 12:49:19 np0005592158.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Jan 22 12:49:19 np0005592158.novalocal rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Jan 22 12:49:19 np0005592158.novalocal rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Started System Logging Service.
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Reached target Multi-User System.
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 22 12:49:19 np0005592158.novalocal rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 12:49:19 np0005592158.novalocal kdumpctl[1020]: kdump: No kdump initial ramdisk found.
Jan 22 12:49:19 np0005592158.novalocal kdumpctl[1020]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 22 12:49:19 np0005592158.novalocal cloud-init[1135]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Thu, 22 Jan 2026 12:49:19 +0000. Up 11.86 seconds.
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 22 12:49:19 np0005592158.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 22 12:49:19 np0005592158.novalocal sshd-session[1221]: Unable to negotiate with 38.102.83.114 port 56308: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 22 12:49:19 np0005592158.novalocal sshd-session[1235]: Unable to negotiate with 38.102.83.114 port 56318: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 22 12:49:19 np0005592158.novalocal sshd-session[1245]: Unable to negotiate with 38.102.83.114 port 56330: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 22 12:49:19 np0005592158.novalocal sshd-session[1207]: Connection closed by 38.102.83.114 port 56296 [preauth]
Jan 22 12:49:19 np0005592158.novalocal sshd-session[1256]: Connection closed by 38.102.83.114 port 56338 [preauth]
Jan 22 12:49:19 np0005592158.novalocal sshd-session[1273]: Connection reset by 38.102.83.114 port 56350 [preauth]
Jan 22 12:49:19 np0005592158.novalocal sshd-session[1224]: Connection closed by 38.102.83.114 port 56314 [preauth]
Jan 22 12:49:19 np0005592158.novalocal sshd-session[1281]: Unable to negotiate with 38.102.83.114 port 56354: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 22 12:49:19 np0005592158.novalocal dracut[1283]: dracut-057-102.git20250818.el9
Jan 22 12:49:19 np0005592158.novalocal sshd-session[1285]: Unable to negotiate with 38.102.83.114 port 56366: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 22 12:49:20 np0005592158.novalocal cloud-init[1304]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Thu, 22 Jan 2026 12:49:20 +0000. Up 12.35 seconds.
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 22 12:49:20 np0005592158.novalocal cloud-init[1333]: #############################################################
Jan 22 12:49:20 np0005592158.novalocal cloud-init[1337]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 22 12:49:20 np0005592158.novalocal cloud-init[1346]: 256 SHA256:bpFA6WEyFisKNlKIhWEOIxmhWQS1WLWXIaTPX+Qn+zM root@np0005592158.novalocal (ECDSA)
Jan 22 12:49:20 np0005592158.novalocal cloud-init[1354]: 256 SHA256:kIVDHGW7cjeKUDtMzNO7kqs8AhPnLEgyrNcxjPv380o root@np0005592158.novalocal (ED25519)
Jan 22 12:49:20 np0005592158.novalocal cloud-init[1361]: 3072 SHA256:tLLf3sAR7bmWyT372+gP1nZboxLQL9EoKjKfZihVlQc root@np0005592158.novalocal (RSA)
Jan 22 12:49:20 np0005592158.novalocal cloud-init[1364]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 22 12:49:20 np0005592158.novalocal cloud-init[1366]: #############################################################
Jan 22 12:49:20 np0005592158.novalocal cloud-init[1304]: Cloud-init v. 24.4-8.el9 finished at Thu, 22 Jan 2026 12:49:20 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 12.55 seconds
Jan 22 12:49:20 np0005592158.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 22 12:49:20 np0005592158.novalocal systemd[1]: Reached target Cloud-init target.
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 22 12:49:20 np0005592158.novalocal dracut[1287]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: memstrack is not available
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: memstrack is not available
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: *** Including module: systemd ***
Jan 22 12:49:21 np0005592158.novalocal dracut[1287]: *** Including module: fips ***
Jan 22 12:49:22 np0005592158.novalocal chronyd[807]: Selected source 198.181.199.84 (2.centos.pool.ntp.org)
Jan 22 12:49:22 np0005592158.novalocal chronyd[807]: System clock TAI offset set to 37 seconds
Jan 22 12:49:22 np0005592158.novalocal dracut[1287]: *** Including module: systemd-initrd ***
Jan 22 12:49:22 np0005592158.novalocal dracut[1287]: *** Including module: i18n ***
Jan 22 12:49:22 np0005592158.novalocal dracut[1287]: *** Including module: drm ***
Jan 22 12:49:22 np0005592158.novalocal dracut[1287]: *** Including module: prefixdevname ***
Jan 22 12:49:22 np0005592158.novalocal dracut[1287]: *** Including module: kernel-modules ***
Jan 22 12:49:23 np0005592158.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 22 12:49:23 np0005592158.novalocal dracut[1287]: *** Including module: kernel-modules-extra ***
Jan 22 12:49:23 np0005592158.novalocal dracut[1287]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 22 12:49:23 np0005592158.novalocal dracut[1287]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 22 12:49:23 np0005592158.novalocal dracut[1287]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 22 12:49:23 np0005592158.novalocal dracut[1287]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 22 12:49:23 np0005592158.novalocal dracut[1287]: *** Including module: qemu ***
Jan 22 12:49:23 np0005592158.novalocal dracut[1287]: *** Including module: fstab-sys ***
Jan 22 12:49:23 np0005592158.novalocal dracut[1287]: *** Including module: rootfs-block ***
Jan 22 12:49:23 np0005592158.novalocal dracut[1287]: *** Including module: terminfo ***
Jan 22 12:49:23 np0005592158.novalocal dracut[1287]: *** Including module: udev-rules ***
Jan 22 12:49:24 np0005592158.novalocal dracut[1287]: Skipping udev rule: 91-permissions.rules
Jan 22 12:49:24 np0005592158.novalocal dracut[1287]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 22 12:49:24 np0005592158.novalocal dracut[1287]: *** Including module: virtiofs ***
Jan 22 12:49:24 np0005592158.novalocal dracut[1287]: *** Including module: dracut-systemd ***
Jan 22 12:49:24 np0005592158.novalocal dracut[1287]: *** Including module: usrmount ***
Jan 22 12:49:24 np0005592158.novalocal dracut[1287]: *** Including module: base ***
Jan 22 12:49:24 np0005592158.novalocal dracut[1287]: *** Including module: fs-lib ***
Jan 22 12:49:24 np0005592158.novalocal dracut[1287]: *** Including module: kdumpbase ***
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:   microcode_ctl module: mangling fw_dir
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: configuration "intel" is ignored
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 22 12:49:25 np0005592158.novalocal irqbalance[785]: Cannot change IRQ 35 affinity: Operation not permitted
Jan 22 12:49:25 np0005592158.novalocal irqbalance[785]: IRQ 35 affinity is now unmanaged
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 22 12:49:25 np0005592158.novalocal irqbalance[785]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 22 12:49:25 np0005592158.novalocal irqbalance[785]: IRQ 25 affinity is now unmanaged
Jan 22 12:49:25 np0005592158.novalocal irqbalance[785]: Cannot change IRQ 33 affinity: Operation not permitted
Jan 22 12:49:25 np0005592158.novalocal irqbalance[785]: IRQ 33 affinity is now unmanaged
Jan 22 12:49:25 np0005592158.novalocal irqbalance[785]: Cannot change IRQ 34 affinity: Operation not permitted
Jan 22 12:49:25 np0005592158.novalocal irqbalance[785]: IRQ 34 affinity is now unmanaged
Jan 22 12:49:25 np0005592158.novalocal irqbalance[785]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 22 12:49:25 np0005592158.novalocal irqbalance[785]: IRQ 32 affinity is now unmanaged
Jan 22 12:49:25 np0005592158.novalocal irqbalance[785]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 22 12:49:25 np0005592158.novalocal irqbalance[785]: IRQ 30 affinity is now unmanaged
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]: *** Including module: openssl ***
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]: *** Including module: shutdown ***
Jan 22 12:49:25 np0005592158.novalocal dracut[1287]: *** Including module: squash ***
Jan 22 12:49:26 np0005592158.novalocal dracut[1287]: *** Including modules done ***
Jan 22 12:49:26 np0005592158.novalocal dracut[1287]: *** Installing kernel module dependencies ***
Jan 22 12:49:26 np0005592158.novalocal dracut[1287]: *** Installing kernel module dependencies done ***
Jan 22 12:49:26 np0005592158.novalocal dracut[1287]: *** Resolving executable dependencies ***
Jan 22 12:49:27 np0005592158.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 12:49:28 np0005592158.novalocal dracut[1287]: *** Resolving executable dependencies done ***
Jan 22 12:49:28 np0005592158.novalocal dracut[1287]: *** Generating early-microcode cpio image ***
Jan 22 12:49:28 np0005592158.novalocal dracut[1287]: *** Store current command line parameters ***
Jan 22 12:49:28 np0005592158.novalocal dracut[1287]: Stored kernel commandline:
Jan 22 12:49:28 np0005592158.novalocal dracut[1287]: No dracut internal kernel commandline stored in the initramfs
Jan 22 12:49:28 np0005592158.novalocal dracut[1287]: *** Install squash loader ***
Jan 22 12:49:29 np0005592158.novalocal dracut[1287]: *** Squashing the files inside the initramfs ***
Jan 22 12:49:30 np0005592158.novalocal dracut[1287]: *** Squashing the files inside the initramfs done ***
Jan 22 12:49:30 np0005592158.novalocal dracut[1287]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 22 12:49:30 np0005592158.novalocal dracut[1287]: *** Hardlinking files ***
Jan 22 12:49:30 np0005592158.novalocal dracut[1287]: Mode:           real
Jan 22 12:49:30 np0005592158.novalocal dracut[1287]: Files:          50
Jan 22 12:49:30 np0005592158.novalocal dracut[1287]: Linked:         0 files
Jan 22 12:49:30 np0005592158.novalocal dracut[1287]: Compared:       0 xattrs
Jan 22 12:49:30 np0005592158.novalocal dracut[1287]: Compared:       0 files
Jan 22 12:49:30 np0005592158.novalocal dracut[1287]: Saved:          0 B
Jan 22 12:49:30 np0005592158.novalocal dracut[1287]: Duration:       0.000478 seconds
Jan 22 12:49:30 np0005592158.novalocal dracut[1287]: *** Hardlinking files done ***
Jan 22 12:49:30 np0005592158.novalocal dracut[1287]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 22 12:49:32 np0005592158.novalocal kdumpctl[1020]: kdump: kexec: loaded kdump kernel
Jan 22 12:49:32 np0005592158.novalocal kdumpctl[1020]: kdump: Starting kdump: [OK]
Jan 22 12:49:32 np0005592158.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 22 12:49:32 np0005592158.novalocal systemd[1]: Startup finished in 3.221s (kernel) + 2.593s (initrd) + 18.765s (userspace) = 24.580s.
Jan 22 12:49:47 np0005592158.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 12:50:30 np0005592158.novalocal sshd-session[4306]: Accepted publickey for zuul from 38.102.83.114 port 34448 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 22 12:50:30 np0005592158.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 22 12:50:30 np0005592158.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 22 12:50:30 np0005592158.novalocal systemd-logind[787]: New session 1 of user zuul.
Jan 22 12:50:30 np0005592158.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 22 12:50:30 np0005592158.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 22 12:50:30 np0005592158.novalocal systemd[4310]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 12:50:30 np0005592158.novalocal systemd[4310]: Queued start job for default target Main User Target.
Jan 22 12:50:30 np0005592158.novalocal systemd[4310]: Created slice User Application Slice.
Jan 22 12:50:30 np0005592158.novalocal systemd[4310]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 22 12:50:30 np0005592158.novalocal systemd[4310]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 12:50:30 np0005592158.novalocal systemd[4310]: Reached target Paths.
Jan 22 12:50:30 np0005592158.novalocal systemd[4310]: Reached target Timers.
Jan 22 12:50:30 np0005592158.novalocal systemd[4310]: Starting D-Bus User Message Bus Socket...
Jan 22 12:50:30 np0005592158.novalocal systemd[4310]: Starting Create User's Volatile Files and Directories...
Jan 22 12:50:30 np0005592158.novalocal systemd[4310]: Finished Create User's Volatile Files and Directories.
Jan 22 12:50:30 np0005592158.novalocal systemd[4310]: Listening on D-Bus User Message Bus Socket.
Jan 22 12:50:30 np0005592158.novalocal systemd[4310]: Reached target Sockets.
Jan 22 12:50:30 np0005592158.novalocal systemd[4310]: Reached target Basic System.
Jan 22 12:50:30 np0005592158.novalocal systemd[4310]: Reached target Main User Target.
Jan 22 12:50:30 np0005592158.novalocal systemd[4310]: Startup finished in 109ms.
Jan 22 12:50:30 np0005592158.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 22 12:50:30 np0005592158.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 22 12:50:30 np0005592158.novalocal sshd-session[4306]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 12:50:31 np0005592158.novalocal python3[4392]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 12:50:34 np0005592158.novalocal python3[4420]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 12:50:41 np0005592158.novalocal python3[4478]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 12:50:42 np0005592158.novalocal python3[4518]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 22 12:50:44 np0005592158.novalocal python3[4544]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1DCoRB3r0Iy6aGg4LRzpWVb+uDCW+ivahM6mnwYTzs7NyJlgPrnZ6PV7GhjThi3qMi3wdL9+LpBaBPuOhI+k1w3f1FS+zKP3/xb59Ck+AhF8LIp3InS3sgWlvIGvXYvlwuN3aBMHp/hbvFOtbZFxgXhvIlVsk+m1K/J/50vtBBzyri7EjoTWDvY18FZoapjDeqss1t7AvCXVAcsVOfZsyssdWALG/AlGcmeZ9kZ/yza1tS0t7avldh0ZazNkLg/5jp3HQrTFLiETLQx8tBjdEj0Pme6UqjG17uVJkEVl4g3FLGiT4krCLRjW0sA3E3rd5e1m4tBIoSSqoqN2E+V9ctp/6T9Vpe3OcZdgKBUE9yz4tlHgQLxksFY2SiXEQYiWTctsRY30EsMJk2Qg65Fyp/ts6u4u66Uo27jNRB+ZD/vnAY4IKu94a2+6uIW/9oShh4f1cWrBlFzxXaUBj4KHar7HFljsOCavs7NCPccp7JoW8FoXONrfM+rhSgDbeDGE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:50:45 np0005592158.novalocal python3[4568]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:45 np0005592158.novalocal python3[4667]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:50:46 np0005592158.novalocal python3[4738]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769086245.4954143-252-122386673280691/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=09ef681cfe834983ad1540236f6f180d_id_rsa follow=False checksum=9eec2026f94d681755d58aa430eaf5c6b319017b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:46 np0005592158.novalocal python3[4861]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:50:47 np0005592158.novalocal python3[4932]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769086246.458629-307-211699865187514/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=09ef681cfe834983ad1540236f6f180d_id_rsa.pub follow=False checksum=f8a39b98331ab3302b65dacd0b8176268aaf7e5b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:49 np0005592158.novalocal python3[4980]: ansible-ping Invoked with data=pong
Jan 22 12:50:50 np0005592158.novalocal python3[5004]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 12:50:52 np0005592158.novalocal python3[5062]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 22 12:50:53 np0005592158.novalocal python3[5094]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:54 np0005592158.novalocal python3[5118]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:54 np0005592158.novalocal python3[5142]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:54 np0005592158.novalocal python3[5166]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:55 np0005592158.novalocal python3[5190]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:55 np0005592158.novalocal python3[5214]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:57 np0005592158.novalocal sudo[5238]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkiiyvughzuigbfihwrfcpgtwbijtzmi ; /usr/bin/python3'
Jan 22 12:50:57 np0005592158.novalocal sudo[5238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:50:57 np0005592158.novalocal python3[5240]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:57 np0005592158.novalocal sudo[5238]: pam_unix(sudo:session): session closed for user root
Jan 22 12:50:57 np0005592158.novalocal sudo[5316]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lplxlqriruanzumdsnqeirafcdognwhv ; /usr/bin/python3'
Jan 22 12:50:57 np0005592158.novalocal sudo[5316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:50:58 np0005592158.novalocal python3[5318]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:50:58 np0005592158.novalocal sudo[5316]: pam_unix(sudo:session): session closed for user root
Jan 22 12:50:58 np0005592158.novalocal sudo[5389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rivyrwyrkcfnidhtiokwpaakbthtasnr ; /usr/bin/python3'
Jan 22 12:50:58 np0005592158.novalocal sudo[5389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:50:58 np0005592158.novalocal python3[5391]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086257.588877-32-83591313562921/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:58 np0005592158.novalocal sudo[5389]: pam_unix(sudo:session): session closed for user root
Jan 22 12:50:59 np0005592158.novalocal python3[5439]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:50:59 np0005592158.novalocal python3[5463]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:50:59 np0005592158.novalocal python3[5487]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:00 np0005592158.novalocal python3[5511]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:00 np0005592158.novalocal python3[5535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:00 np0005592158.novalocal python3[5559]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:00 np0005592158.novalocal python3[5583]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:01 np0005592158.novalocal python3[5607]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:01 np0005592158.novalocal python3[5631]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:01 np0005592158.novalocal python3[5655]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:01 np0005592158.novalocal python3[5679]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:02 np0005592158.novalocal python3[5703]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:02 np0005592158.novalocal python3[5727]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:02 np0005592158.novalocal python3[5751]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:03 np0005592158.novalocal python3[5775]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:03 np0005592158.novalocal python3[5799]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:03 np0005592158.novalocal python3[5823]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:04 np0005592158.novalocal python3[5847]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:04 np0005592158.novalocal python3[5871]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:04 np0005592158.novalocal python3[5895]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:04 np0005592158.novalocal python3[5919]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:05 np0005592158.novalocal python3[5943]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:05 np0005592158.novalocal python3[5967]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:05 np0005592158.novalocal python3[5991]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:06 np0005592158.novalocal python3[6015]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:06 np0005592158.novalocal python3[6039]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:08 np0005592158.novalocal sudo[6063]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gypjnjgmvtqgnvwhhvsvjqvupdptxdpl ; /usr/bin/python3'
Jan 22 12:51:08 np0005592158.novalocal sudo[6063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:08 np0005592158.novalocal python3[6065]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 12:51:08 np0005592158.novalocal systemd[1]: Starting Time & Date Service...
Jan 22 12:51:09 np0005592158.novalocal systemd[1]: Started Time & Date Service.
Jan 22 12:51:09 np0005592158.novalocal systemd-timedated[6067]: Changed time zone to 'UTC' (UTC).
Jan 22 12:51:09 np0005592158.novalocal sudo[6063]: pam_unix(sudo:session): session closed for user root
Jan 22 12:51:09 np0005592158.novalocal sudo[6094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wizysbdoxudbrnrmylywlbkiejwxqayd ; /usr/bin/python3'
Jan 22 12:51:09 np0005592158.novalocal sudo[6094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:09 np0005592158.novalocal python3[6096]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:51:09 np0005592158.novalocal sudo[6094]: pam_unix(sudo:session): session closed for user root
Jan 22 12:51:09 np0005592158.novalocal python3[6172]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:51:10 np0005592158.novalocal python3[6243]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769086269.7049272-252-2810044716117/source _original_basename=tmp6jielptk follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:51:11 np0005592158.novalocal python3[6343]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:51:11 np0005592158.novalocal python3[6414]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769086271.0891168-303-140319692310831/source _original_basename=tmp2c52kqcr follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:51:12 np0005592158.novalocal sudo[6514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lytmrpsikjrxhayfgaixlzpgfclsbnee ; /usr/bin/python3'
Jan 22 12:51:12 np0005592158.novalocal sudo[6514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:12 np0005592158.novalocal python3[6516]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:51:12 np0005592158.novalocal sudo[6514]: pam_unix(sudo:session): session closed for user root
Jan 22 12:51:12 np0005592158.novalocal sudo[6587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmtvkezyonrwailypgdbdhovyczkzghx ; /usr/bin/python3'
Jan 22 12:51:12 np0005592158.novalocal sudo[6587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:13 np0005592158.novalocal python3[6589]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769086272.4680164-382-277558674465974/source _original_basename=tmpsihnrey6 follow=False checksum=cb6c1a5f96f80c368134b306cfb8a4ce10f90c11 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:51:13 np0005592158.novalocal sudo[6587]: pam_unix(sudo:session): session closed for user root
Jan 22 12:51:13 np0005592158.novalocal python3[6637]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 12:51:14 np0005592158.novalocal python3[6663]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 12:51:14 np0005592158.novalocal sudo[6741]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaqnhkblnxokgopcoxabrrzhpolnmhvi ; /usr/bin/python3'
Jan 22 12:51:14 np0005592158.novalocal sudo[6741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:14 np0005592158.novalocal python3[6743]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:51:14 np0005592158.novalocal sudo[6741]: pam_unix(sudo:session): session closed for user root
Jan 22 12:51:14 np0005592158.novalocal sudo[6814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-funsjibvlzgjjucnhahqyoetcjfxkhcy ; /usr/bin/python3'
Jan 22 12:51:14 np0005592158.novalocal sudo[6814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:15 np0005592158.novalocal python3[6816]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086274.37694-452-135495289471916/source _original_basename=tmpe2h_oqjp follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:51:15 np0005592158.novalocal sudo[6814]: pam_unix(sudo:session): session closed for user root
Jan 22 12:51:15 np0005592158.novalocal sudo[6865]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fusaoauncvchclyvqlswasyykprcvxfq ; /usr/bin/python3'
Jan 22 12:51:15 np0005592158.novalocal sudo[6865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:15 np0005592158.novalocal python3[6867]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-37d2-1cc7-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 12:51:15 np0005592158.novalocal sudo[6865]: pam_unix(sudo:session): session closed for user root
Jan 22 12:51:16 np0005592158.novalocal python3[6895]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-37d2-1cc7-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 22 12:51:18 np0005592158.novalocal python3[6923]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:51:25 np0005592158.novalocal irqbalance[785]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 22 12:51:25 np0005592158.novalocal irqbalance[785]: IRQ 27 affinity is now unmanaged
Jan 22 12:51:39 np0005592158.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 12:51:42 np0005592158.novalocal sudo[6949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttlvlhviebccucbhirlrfvdkiwguntuq ; /usr/bin/python3'
Jan 22 12:51:42 np0005592158.novalocal sudo[6949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:42 np0005592158.novalocal python3[6951]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:51:42 np0005592158.novalocal sudo[6949]: pam_unix(sudo:session): session closed for user root
Jan 22 12:52:42 np0005592158.novalocal sshd-session[4319]: Received disconnect from 38.102.83.114 port 34448:11: disconnected by user
Jan 22 12:52:42 np0005592158.novalocal sshd-session[4319]: Disconnected from user zuul 38.102.83.114 port 34448
Jan 22 12:52:42 np0005592158.novalocal sshd-session[4306]: pam_unix(sshd:session): session closed for user zuul
Jan 22 12:52:42 np0005592158.novalocal systemd-logind[787]: Session 1 logged out. Waiting for processes to exit.
Jan 22 12:52:53 np0005592158.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 22 12:52:53 np0005592158.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 22 12:52:53 np0005592158.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 22 12:52:53 np0005592158.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 22 12:52:53 np0005592158.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 22 12:52:53 np0005592158.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 22 12:52:53 np0005592158.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 22 12:52:53 np0005592158.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 22 12:52:53 np0005592158.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 22 12:52:53 np0005592158.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 22 12:52:53 np0005592158.novalocal NetworkManager[860]: <info>  [1769086373.8464] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 12:52:53 np0005592158.novalocal systemd-udevd[6953]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 12:52:53 np0005592158.novalocal NetworkManager[860]: <info>  [1769086373.8608] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 12:52:53 np0005592158.novalocal NetworkManager[860]: <info>  [1769086373.8641] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 22 12:52:53 np0005592158.novalocal NetworkManager[860]: <info>  [1769086373.8643] device (eth1): carrier: link connected
Jan 22 12:52:53 np0005592158.novalocal NetworkManager[860]: <info>  [1769086373.8645] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 22 12:52:53 np0005592158.novalocal NetworkManager[860]: <info>  [1769086373.8651] policy: auto-activating connection 'Wired connection 1' (22966868-29c6-340d-be5e-bba5c29bb571)
Jan 22 12:52:53 np0005592158.novalocal NetworkManager[860]: <info>  [1769086373.8654] device (eth1): Activation: starting connection 'Wired connection 1' (22966868-29c6-340d-be5e-bba5c29bb571)
Jan 22 12:52:53 np0005592158.novalocal NetworkManager[860]: <info>  [1769086373.8655] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 12:52:53 np0005592158.novalocal NetworkManager[860]: <info>  [1769086373.8657] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 12:52:53 np0005592158.novalocal NetworkManager[860]: <info>  [1769086373.8660] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 12:52:53 np0005592158.novalocal NetworkManager[860]: <info>  [1769086373.8665] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 12:52:53 np0005592158.novalocal systemd[4310]: Starting Mark boot as successful...
Jan 22 12:52:53 np0005592158.novalocal systemd[4310]: Finished Mark boot as successful.
Jan 22 12:52:55 np0005592158.novalocal sshd-session[6957]: Accepted publickey for zuul from 38.102.83.114 port 41024 ssh2: RSA SHA256:TuAhGULDfe9nJAKjmqaszwyLr0Lzzf2znQ+Nnm8F8LU
Jan 22 12:52:55 np0005592158.novalocal systemd-logind[787]: New session 3 of user zuul.
Jan 22 12:52:55 np0005592158.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 22 12:52:55 np0005592158.novalocal sshd-session[6957]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 12:52:56 np0005592158.novalocal python3[6984]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-97dc-dff7-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 12:53:05 np0005592158.novalocal sudo[7062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iquypxcgibumnjzqoicshkvgwpdxbowq ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 22 12:53:05 np0005592158.novalocal sudo[7062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:53:06 np0005592158.novalocal python3[7064]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:53:06 np0005592158.novalocal sudo[7062]: pam_unix(sudo:session): session closed for user root
Jan 22 12:53:06 np0005592158.novalocal sudo[7135]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eigmaxjcelpdbvxbnpffilcdlcrtzbsa ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 22 12:53:06 np0005592158.novalocal sudo[7135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:53:06 np0005592158.novalocal python3[7137]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769086385.7730064-155-104606407793157/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=66961519467b8831ba0c243060d8ab522bdd948e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:53:06 np0005592158.novalocal sudo[7135]: pam_unix(sudo:session): session closed for user root
Jan 22 12:53:06 np0005592158.novalocal sudo[7185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flpizllfutgxlnsvxiivvufuvjhaesyi ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 22 12:53:06 np0005592158.novalocal sudo[7185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:53:06 np0005592158.novalocal python3[7187]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 12:53:06 np0005592158.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 22 12:53:06 np0005592158.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 22 12:53:06 np0005592158.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 22 12:53:06 np0005592158.novalocal systemd[1]: Stopping Network Manager...
Jan 22 12:53:06 np0005592158.novalocal NetworkManager[860]: <info>  [1769086386.9985] caught SIGTERM, shutting down normally.
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[860]: <info>  [1769086387.0004] dhcp4 (eth0): canceled DHCP transaction
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[860]: <info>  [1769086387.0004] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[860]: <info>  [1769086387.0005] dhcp4 (eth0): state changed no lease
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[860]: <info>  [1769086387.0009] manager: NetworkManager state is now CONNECTING
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[860]: <info>  [1769086387.0107] dhcp4 (eth1): canceled DHCP transaction
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[860]: <info>  [1769086387.0107] dhcp4 (eth1): state changed no lease
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[860]: <info>  [1769086387.0177] exiting (success)
Jan 22 12:53:07 np0005592158.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 12:53:07 np0005592158.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 12:53:07 np0005592158.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 22 12:53:07 np0005592158.novalocal systemd[1]: Stopped Network Manager.
Jan 22 12:53:07 np0005592158.novalocal systemd[1]: NetworkManager.service: Consumed 1.626s CPU time, 10.5M memory peak.
Jan 22 12:53:07 np0005592158.novalocal systemd[1]: Starting Network Manager...
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.0726] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:d923d6f4-79ae-48f6-b1f3-cf5ec2bceff3)
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.0728] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.0780] manager[0x558895620000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 12:53:07 np0005592158.novalocal systemd[1]: Starting Hostname Service...
Jan 22 12:53:07 np0005592158.novalocal systemd[1]: Started Hostname Service.
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1535] hostname: hostname: using hostnamed
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1536] hostname: static hostname changed from (none) to "np0005592158.novalocal"
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1542] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1547] manager[0x558895620000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1548] manager[0x558895620000]: rfkill: WWAN hardware radio set enabled
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1584] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1585] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1585] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1587] manager: Networking is enabled by state file
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1589] settings: Loaded settings plugin: keyfile (internal)
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1595] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1627] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1641] dhcp: init: Using DHCP client 'internal'
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1648] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1656] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1664] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1676] device (lo): Activation: starting connection 'lo' (85925d65-d6c4-4300-b142-abef792fcfc1)
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1687] device (eth0): carrier: link connected
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1693] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1701] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1702] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1710] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1721] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1730] device (eth1): carrier: link connected
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1735] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1743] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (22966868-29c6-340d-be5e-bba5c29bb571) (indicated)
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1743] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1750] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1760] device (eth1): Activation: starting connection 'Wired connection 1' (22966868-29c6-340d-be5e-bba5c29bb571)
Jan 22 12:53:07 np0005592158.novalocal systemd[1]: Started Network Manager.
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1769] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1776] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1780] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1792] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1795] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1799] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1802] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1804] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1807] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1813] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1818] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1828] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1830] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1846] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1850] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1854] device (lo): Activation: successful, device activated.
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1888] dhcp4 (eth0): state changed new lease, address=38.102.83.119
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1894] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 12:53:07 np0005592158.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1948] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1963] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1964] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1967] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1969] device (eth0): Activation: successful, device activated.
Jan 22 12:53:07 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086387.1973] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 12:53:07 np0005592158.novalocal sudo[7185]: pam_unix(sudo:session): session closed for user root
Jan 22 12:53:07 np0005592158.novalocal python3[7271]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-97dc-dff7-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 12:53:17 np0005592158.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 12:53:37 np0005592158.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.6613] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 12:53:52 np0005592158.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 12:53:52 np0005592158.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.6998] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.7003] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.7019] device (eth1): Activation: successful, device activated.
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.7029] manager: startup complete
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.7035] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <warn>  [1769086432.7062] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.7075] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 22 12:53:52 np0005592158.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.7227] dhcp4 (eth1): canceled DHCP transaction
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.7229] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.7229] dhcp4 (eth1): state changed no lease
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.7251] policy: auto-activating connection 'ci-private-network' (ca5780bd-10f2-5d02-a1d0-e241b484666f)
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.7258] device (eth1): Activation: starting connection 'ci-private-network' (ca5780bd-10f2-5d02-a1d0-e241b484666f)
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.7259] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.7263] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.7272] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 12:53:52 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086432.7286] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 12:53:53 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086433.3971] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 12:53:53 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086433.3981] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 12:53:53 np0005592158.novalocal NetworkManager[7197]: <info>  [1769086433.3987] device (eth1): Activation: successful, device activated.
Jan 22 12:54:03 np0005592158.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 12:54:07 np0005592158.novalocal sshd-session[6960]: Received disconnect from 38.102.83.114 port 41024:11: disconnected by user
Jan 22 12:54:07 np0005592158.novalocal sshd-session[6960]: Disconnected from user zuul 38.102.83.114 port 41024
Jan 22 12:54:07 np0005592158.novalocal sshd-session[6957]: pam_unix(sshd:session): session closed for user zuul
Jan 22 12:54:07 np0005592158.novalocal systemd-logind[787]: Session 3 logged out. Waiting for processes to exit.
Jan 22 12:54:07 np0005592158.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 22 12:54:07 np0005592158.novalocal systemd[1]: session-3.scope: Consumed 1.573s CPU time.
Jan 22 12:54:07 np0005592158.novalocal systemd-logind[787]: Removed session 3.
Jan 22 12:54:58 np0005592158.novalocal sshd-session[7300]: Accepted publickey for zuul from 38.102.83.114 port 54078 ssh2: RSA SHA256:TuAhGULDfe9nJAKjmqaszwyLr0Lzzf2znQ+Nnm8F8LU
Jan 22 12:54:58 np0005592158.novalocal systemd-logind[787]: New session 4 of user zuul.
Jan 22 12:54:58 np0005592158.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 22 12:54:58 np0005592158.novalocal sshd-session[7300]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 12:54:59 np0005592158.novalocal sudo[7379]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnxwpwqquusurbnsejunddxqaouiprpm ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 22 12:54:59 np0005592158.novalocal sudo[7379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:54:59 np0005592158.novalocal python3[7381]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:54:59 np0005592158.novalocal sudo[7379]: pam_unix(sudo:session): session closed for user root
Jan 22 12:54:59 np0005592158.novalocal sudo[7452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwqczvvfwhjugcgntissjlmeazozdaoz ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 22 12:54:59 np0005592158.novalocal sudo[7452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:54:59 np0005592158.novalocal python3[7454]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086498.9099114-373-212722702516492/source _original_basename=tmpo0m84ckm follow=False checksum=5e7e0974f47bfd675c68ead6f6109233c4c9d481 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:54:59 np0005592158.novalocal sudo[7452]: pam_unix(sudo:session): session closed for user root
Jan 22 12:55:02 np0005592158.novalocal sshd-session[7303]: Connection closed by 38.102.83.114 port 54078
Jan 22 12:55:02 np0005592158.novalocal sshd-session[7300]: pam_unix(sshd:session): session closed for user zuul
Jan 22 12:55:02 np0005592158.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 22 12:55:02 np0005592158.novalocal systemd-logind[787]: Session 4 logged out. Waiting for processes to exit.
Jan 22 12:55:02 np0005592158.novalocal systemd-logind[787]: Removed session 4.
Jan 22 12:56:29 np0005592158.novalocal systemd[4310]: Created slice User Background Tasks Slice.
Jan 22 12:56:30 np0005592158.novalocal systemd[4310]: Starting Cleanup of User's Temporary Files and Directories...
Jan 22 12:56:30 np0005592158.novalocal systemd[4310]: Finished Cleanup of User's Temporary Files and Directories.
Jan 22 13:00:10 np0005592158.novalocal sshd-session[7484]: Accepted publickey for zuul from 38.102.83.114 port 44480 ssh2: RSA SHA256:TuAhGULDfe9nJAKjmqaszwyLr0Lzzf2znQ+Nnm8F8LU
Jan 22 13:00:10 np0005592158.novalocal systemd-logind[787]: New session 5 of user zuul.
Jan 22 13:00:10 np0005592158.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 22 13:00:10 np0005592158.novalocal sshd-session[7484]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:00:10 np0005592158.novalocal sudo[7511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boznpprpfkpfikvvlmagfwprkmdfhauq ; /usr/bin/python3'
Jan 22 13:00:10 np0005592158.novalocal sudo[7511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:10 np0005592158.novalocal python3[7513]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-68e9-2a3f-000000000ca0-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:00:10 np0005592158.novalocal sudo[7511]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:11 np0005592158.novalocal sudo[7540]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atqrwcsbacfmgfhevfjvlrmekuvajjwq ; /usr/bin/python3'
Jan 22 13:00:11 np0005592158.novalocal sudo[7540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:11 np0005592158.novalocal python3[7542]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:00:11 np0005592158.novalocal sudo[7540]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:11 np0005592158.novalocal sudo[7566]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnvwqqgwhwmaxpgcfntycvgbwpqpikrg ; /usr/bin/python3'
Jan 22 13:00:11 np0005592158.novalocal sudo[7566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:12 np0005592158.novalocal python3[7568]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:00:12 np0005592158.novalocal sudo[7566]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:12 np0005592158.novalocal sudo[7592]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfqnrfxnbvmsnhtaouywesesmkiqjokt ; /usr/bin/python3'
Jan 22 13:00:12 np0005592158.novalocal sudo[7592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:12 np0005592158.novalocal python3[7594]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:00:12 np0005592158.novalocal sudo[7592]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:12 np0005592158.novalocal sudo[7618]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egahzjzfunlpbusovtunhfptlprlvlcr ; /usr/bin/python3'
Jan 22 13:00:12 np0005592158.novalocal sudo[7618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:12 np0005592158.novalocal python3[7620]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:00:12 np0005592158.novalocal sudo[7618]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:12 np0005592158.novalocal sudo[7644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apfxryfmcmxwxsxysjngnspwwmtlczin ; /usr/bin/python3'
Jan 22 13:00:12 np0005592158.novalocal sudo[7644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:12 np0005592158.novalocal python3[7646]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:00:13 np0005592158.novalocal sudo[7644]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:13 np0005592158.novalocal sudo[7722]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xywfrzodqonqpxizezcdkgmtpqqjkafm ; /usr/bin/python3'
Jan 22 13:00:13 np0005592158.novalocal sudo[7722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:13 np0005592158.novalocal python3[7724]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:00:13 np0005592158.novalocal sudo[7722]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:13 np0005592158.novalocal sudo[7795]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivruugydbsnfmhwmkzlcbsbxwuswwmxp ; /usr/bin/python3'
Jan 22 13:00:13 np0005592158.novalocal sudo[7795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:13 np0005592158.novalocal python3[7797]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086813.2072291-363-280776224384526/source _original_basename=tmpxbtn2uca follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:00:13 np0005592158.novalocal sudo[7795]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:14 np0005592158.novalocal sudo[7845]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edrvmoyaxoxmzjcavnyoptchgzkyegwr ; /usr/bin/python3'
Jan 22 13:00:14 np0005592158.novalocal sudo[7845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:14 np0005592158.novalocal python3[7847]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:00:15 np0005592158.novalocal systemd[1]: Reloading.
Jan 22 13:00:15 np0005592158.novalocal systemd-rc-local-generator[7871]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:00:15 np0005592158.novalocal sudo[7845]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:16 np0005592158.novalocal sudo[7901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqtjjijwcspkevkjzcloeujhbsfxpwwl ; /usr/bin/python3'
Jan 22 13:00:16 np0005592158.novalocal sudo[7901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:16 np0005592158.novalocal python3[7903]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 22 13:00:16 np0005592158.novalocal sudo[7901]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:17 np0005592158.novalocal sudo[7927]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hblltcbwmubjjsiwucceimnoyupalxkw ; /usr/bin/python3'
Jan 22 13:00:17 np0005592158.novalocal sudo[7927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:18 np0005592158.novalocal python3[7929]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:00:18 np0005592158.novalocal sudo[7927]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:18 np0005592158.novalocal sudo[7955]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgubviztuuzjvydhaepgqiqgrlpzzted ; /usr/bin/python3'
Jan 22 13:00:18 np0005592158.novalocal sudo[7955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:18 np0005592158.novalocal python3[7957]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:00:18 np0005592158.novalocal sudo[7955]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:18 np0005592158.novalocal sudo[7983]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cilkavgkhckhujafjnpuidkkjmyemesz ; /usr/bin/python3'
Jan 22 13:00:18 np0005592158.novalocal sudo[7983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:18 np0005592158.novalocal python3[7985]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:00:18 np0005592158.novalocal sudo[7983]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:18 np0005592158.novalocal sudo[8011]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edlgphueqkdexbynrputthtgygkziluh ; /usr/bin/python3'
Jan 22 13:00:18 np0005592158.novalocal sudo[8011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:18 np0005592158.novalocal python3[8013]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:00:18 np0005592158.novalocal sudo[8011]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:19 np0005592158.novalocal python3[8040]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-68e9-2a3f-000000000ca7-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:00:19 np0005592158.novalocal python3[8070]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 13:00:22 np0005592158.novalocal sshd-session[7487]: Connection closed by 38.102.83.114 port 44480
Jan 22 13:00:22 np0005592158.novalocal sshd-session[7484]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:00:22 np0005592158.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 22 13:00:22 np0005592158.novalocal systemd[1]: session-5.scope: Consumed 4.334s CPU time.
Jan 22 13:00:22 np0005592158.novalocal systemd-logind[787]: Session 5 logged out. Waiting for processes to exit.
Jan 22 13:00:22 np0005592158.novalocal systemd-logind[787]: Removed session 5.
Jan 22 13:00:24 np0005592158.novalocal sshd-session[8074]: Accepted publickey for zuul from 38.102.83.114 port 47560 ssh2: RSA SHA256:TuAhGULDfe9nJAKjmqaszwyLr0Lzzf2znQ+Nnm8F8LU
Jan 22 13:00:24 np0005592158.novalocal systemd-logind[787]: New session 6 of user zuul.
Jan 22 13:00:24 np0005592158.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 22 13:00:24 np0005592158.novalocal sshd-session[8074]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:00:24 np0005592158.novalocal sudo[8101]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfcihpexfenlvopgyqwbvkkunpfwgvyd ; /usr/bin/python3'
Jan 22 13:00:24 np0005592158.novalocal sudo[8101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:25 np0005592158.novalocal python3[8103]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 22 13:00:31 np0005592158.novalocal setsebool[8141]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 22 13:00:31 np0005592158.novalocal setsebool[8141]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 22 13:00:45 np0005592158.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 22 13:00:45 np0005592158.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:00:45 np0005592158.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 22 13:00:45 np0005592158.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:00:45 np0005592158.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:00:45 np0005592158.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:00:45 np0005592158.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:00:45 np0005592158.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:00:55 np0005592158.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 22 13:00:55 np0005592158.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:00:55 np0005592158.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 22 13:00:55 np0005592158.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:00:55 np0005592158.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:00:55 np0005592158.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:00:55 np0005592158.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:00:55 np0005592158.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:01:01 np0005592158.novalocal CROND[8872]: (root) CMD (run-parts /etc/cron.hourly)
Jan 22 13:01:01 np0005592158.novalocal run-parts[8875]: (/etc/cron.hourly) starting 0anacron
Jan 22 13:01:01 np0005592158.novalocal anacron[8883]: Anacron started on 2026-01-22
Jan 22 13:01:01 np0005592158.novalocal anacron[8883]: Will run job `cron.daily' in 19 min.
Jan 22 13:01:01 np0005592158.novalocal anacron[8883]: Will run job `cron.weekly' in 39 min.
Jan 22 13:01:01 np0005592158.novalocal anacron[8883]: Will run job `cron.monthly' in 59 min.
Jan 22 13:01:01 np0005592158.novalocal anacron[8883]: Jobs will be executed sequentially
Jan 22 13:01:01 np0005592158.novalocal run-parts[8885]: (/etc/cron.hourly) finished 0anacron
Jan 22 13:01:01 np0005592158.novalocal CROND[8871]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 22 13:01:14 np0005592158.novalocal dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 22 13:01:14 np0005592158.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:01:14 np0005592158.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:01:14 np0005592158.novalocal systemd[1]: Reloading.
Jan 22 13:01:14 np0005592158.novalocal systemd-rc-local-generator[8927]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:01:15 np0005592158.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:01:17 np0005592158.novalocal sudo[8101]: pam_unix(sudo:session): session closed for user root
Jan 22 13:01:25 np0005592158.novalocal python3[15124]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-af35-cd98-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:01:26 np0005592158.novalocal kernel: evm: overlay not supported
Jan 22 13:01:26 np0005592158.novalocal systemd[4310]: Starting D-Bus User Message Bus...
Jan 22 13:01:26 np0005592158.novalocal dbus-broker-launch[15772]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 22 13:01:26 np0005592158.novalocal dbus-broker-launch[15772]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 22 13:01:26 np0005592158.novalocal systemd[4310]: Started D-Bus User Message Bus.
Jan 22 13:01:26 np0005592158.novalocal dbus-broker-lau[15772]: Ready
Jan 22 13:01:26 np0005592158.novalocal systemd[4310]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 22 13:01:26 np0005592158.novalocal systemd[4310]: Created slice Slice /user.
Jan 22 13:01:26 np0005592158.novalocal systemd[4310]: podman-15652.scope: unit configures an IP firewall, but not running as root.
Jan 22 13:01:26 np0005592158.novalocal systemd[4310]: (This warning is only shown for the first unit using IP firewalling.)
Jan 22 13:01:26 np0005592158.novalocal systemd[4310]: Started podman-15652.scope.
Jan 22 13:01:26 np0005592158.novalocal systemd[4310]: Started podman-pause-64e70ee2.scope.
Jan 22 13:01:27 np0005592158.novalocal sudo[16115]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajtaybtuelmqucgshwshdphuzvuwsqht ; /usr/bin/python3'
Jan 22 13:01:27 np0005592158.novalocal sudo[16115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:01:27 np0005592158.novalocal python3[16125]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.194:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.194:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:01:27 np0005592158.novalocal python3[16125]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 22 13:01:27 np0005592158.novalocal sudo[16115]: pam_unix(sudo:session): session closed for user root
Jan 22 13:01:28 np0005592158.novalocal sshd-session[8077]: Connection closed by 38.102.83.114 port 47560
Jan 22 13:01:28 np0005592158.novalocal sshd-session[8074]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:01:28 np0005592158.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Jan 22 13:01:28 np0005592158.novalocal systemd[1]: session-6.scope: Consumed 48.989s CPU time.
Jan 22 13:01:28 np0005592158.novalocal systemd-logind[787]: Session 6 logged out. Waiting for processes to exit.
Jan 22 13:01:28 np0005592158.novalocal systemd-logind[787]: Removed session 6.
Jan 22 13:01:49 np0005592158.novalocal sshd-session[23795]: Connection closed by 38.102.83.41 port 58828 [preauth]
Jan 22 13:01:49 np0005592158.novalocal sshd-session[23796]: Unable to negotiate with 38.102.83.41 port 58858: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 22 13:01:49 np0005592158.novalocal sshd-session[23799]: Connection closed by 38.102.83.41 port 58842 [preauth]
Jan 22 13:01:49 np0005592158.novalocal sshd-session[23798]: Unable to negotiate with 38.102.83.41 port 58866: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 22 13:01:49 np0005592158.novalocal sshd-session[23797]: Unable to negotiate with 38.102.83.41 port 58876: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 22 13:01:54 np0005592158.novalocal sshd-session[24809]: Accepted publickey for zuul from 38.102.83.114 port 56936 ssh2: RSA SHA256:TuAhGULDfe9nJAKjmqaszwyLr0Lzzf2znQ+Nnm8F8LU
Jan 22 13:01:54 np0005592158.novalocal systemd-logind[787]: New session 7 of user zuul.
Jan 22 13:01:54 np0005592158.novalocal systemd[1]: Started Session 7 of User zuul.
Jan 22 13:01:54 np0005592158.novalocal sshd-session[24809]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:01:54 np0005592158.novalocal python3[24921]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJXWzJINFux2Y3W71Rz6OTPUrCjH8iByostW8OdI2DuZKTtkp9FbD8EiNvlPjARok6n/DFn2L3T6ys0ILkIENxo= zuul@np0005592156.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 13:01:54 np0005592158.novalocal sudo[25117]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gohlqqygdumahskvbbienavlleddcdle ; /usr/bin/python3'
Jan 22 13:01:54 np0005592158.novalocal sudo[25117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:01:55 np0005592158.novalocal python3[25127]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJXWzJINFux2Y3W71Rz6OTPUrCjH8iByostW8OdI2DuZKTtkp9FbD8EiNvlPjARok6n/DFn2L3T6ys0ILkIENxo= zuul@np0005592156.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 13:01:55 np0005592158.novalocal sudo[25117]: pam_unix(sudo:session): session closed for user root
Jan 22 13:01:55 np0005592158.novalocal sudo[25491]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezwtcxehyjnhqlragkqsowueoffeletl ; /usr/bin/python3'
Jan 22 13:01:55 np0005592158.novalocal sudo[25491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:01:55 np0005592158.novalocal python3[25501]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005592158.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 22 13:01:56 np0005592158.novalocal useradd[25581]: new group: name=cloud-admin, GID=1002
Jan 22 13:01:56 np0005592158.novalocal useradd[25581]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 22 13:01:56 np0005592158.novalocal sudo[25491]: pam_unix(sudo:session): session closed for user root
Jan 22 13:01:59 np0005592158.novalocal sudo[26942]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsexifhcbxxpqyghlbbnztyxghhkwgvb ; /usr/bin/python3'
Jan 22 13:01:59 np0005592158.novalocal sudo[26942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:01:59 np0005592158.novalocal python3[26954]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJXWzJINFux2Y3W71Rz6OTPUrCjH8iByostW8OdI2DuZKTtkp9FbD8EiNvlPjARok6n/DFn2L3T6ys0ILkIENxo= zuul@np0005592156.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 13:01:59 np0005592158.novalocal sudo[26942]: pam_unix(sudo:session): session closed for user root
Jan 22 13:02:00 np0005592158.novalocal sudo[27267]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naljjmvsztfsfggbtrcsyqjurfzdjpfm ; /usr/bin/python3'
Jan 22 13:02:00 np0005592158.novalocal sudo[27267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:02:00 np0005592158.novalocal python3[27269]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:02:00 np0005592158.novalocal sudo[27267]: pam_unix(sudo:session): session closed for user root
Jan 22 13:02:01 np0005592158.novalocal sudo[27496]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pownjlvzuymjidwjihgvttnjvdigoqic ; /usr/bin/python3'
Jan 22 13:02:01 np0005592158.novalocal sudo[27496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:02:01 np0005592158.novalocal python3[27498]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086920.3643124-168-2838513463597/source _original_basename=tmpfe18g3ex follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:02:01 np0005592158.novalocal sudo[27496]: pam_unix(sudo:session): session closed for user root
Jan 22 13:02:01 np0005592158.novalocal sudo[27829]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfonvuqrmsminbfvjyeezgcgoduwwqwa ; /usr/bin/python3'
Jan 22 13:02:01 np0005592158.novalocal sudo[27829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:02:02 np0005592158.novalocal python3[27838]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Jan 22 13:02:02 np0005592158.novalocal systemd[1]: Starting Hostname Service...
Jan 22 13:02:02 np0005592158.novalocal systemd[1]: Started Hostname Service.
Jan 22 13:02:02 np0005592158.novalocal systemd-hostnamed[27942]: Changed pretty hostname to 'compute-1'
Jan 22 13:02:02 compute-1 systemd-hostnamed[27942]: Hostname set to <compute-1> (static)
Jan 22 13:02:02 compute-1 NetworkManager[7197]: <info>  [1769086922.3286] hostname: static hostname changed from "np0005592158.novalocal" to "compute-1"
Jan 22 13:02:02 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 13:02:02 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 13:02:02 compute-1 sudo[27829]: pam_unix(sudo:session): session closed for user root
Jan 22 13:02:02 compute-1 sshd-session[24864]: Connection closed by 38.102.83.114 port 56936
Jan 22 13:02:02 compute-1 sshd-session[24809]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:02:02 compute-1 systemd[1]: session-7.scope: Deactivated successfully.
Jan 22 13:02:02 compute-1 systemd[1]: session-7.scope: Consumed 2.395s CPU time.
Jan 22 13:02:02 compute-1 systemd-logind[787]: Session 7 logged out. Waiting for processes to exit.
Jan 22 13:02:02 compute-1 systemd-logind[787]: Removed session 7.
Jan 22 13:02:11 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:02:11 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:02:11 compute-1 systemd[1]: man-db-cache-update.service: Consumed 59.950s CPU time.
Jan 22 13:02:11 compute-1 systemd[1]: run-r1c0f0d83d91a4d6f8507d7ad1a74983e.service: Deactivated successfully.
Jan 22 13:02:12 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 13:02:32 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 13:04:09 compute-1 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 22 13:04:10 compute-1 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 22 13:04:10 compute-1 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 22 13:04:10 compute-1 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 22 13:07:01 compute-1 sshd-session[29935]: Accepted publickey for zuul from 38.102.83.41 port 43034 ssh2: RSA SHA256:TuAhGULDfe9nJAKjmqaszwyLr0Lzzf2znQ+Nnm8F8LU
Jan 22 13:07:01 compute-1 systemd-logind[787]: New session 8 of user zuul.
Jan 22 13:07:01 compute-1 systemd[1]: Started Session 8 of User zuul.
Jan 22 13:07:01 compute-1 sshd-session[29935]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:07:01 compute-1 python3[30011]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:07:03 compute-1 sudo[30125]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epyikmvvfqxjecwwbyshjnsrmssuqire ; /usr/bin/python3'
Jan 22 13:07:03 compute-1 sudo[30125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:03 compute-1 python3[30127]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:07:03 compute-1 sudo[30125]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:03 compute-1 sudo[30198]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdtqcnrdnuwwqtazekdsmbzwunwvcfus ; /usr/bin/python3'
Jan 22 13:07:03 compute-1 sudo[30198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:03 compute-1 python3[30200]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1712844-34124-22115056938444/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:07:03 compute-1 sudo[30198]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:04 compute-1 sudo[30224]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atcjwmmwemicdqscccnsfgvgdchiolhq ; /usr/bin/python3'
Jan 22 13:07:04 compute-1 sudo[30224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:04 compute-1 python3[30226]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:07:04 compute-1 sudo[30224]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:04 compute-1 sudo[30297]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkeldiepuysgihuoewgarnzzetbpuxez ; /usr/bin/python3'
Jan 22 13:07:04 compute-1 sudo[30297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:04 compute-1 python3[30299]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1712844-34124-22115056938444/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:07:04 compute-1 sudo[30297]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:04 compute-1 sudo[30323]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcfixudepedavqrflxwspzflmjgwxarg ; /usr/bin/python3'
Jan 22 13:07:04 compute-1 sudo[30323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:04 compute-1 python3[30325]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:07:04 compute-1 sudo[30323]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:05 compute-1 sudo[30396]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwvwfbyloxlkoowdmnyzsqpqblqtchyf ; /usr/bin/python3'
Jan 22 13:07:05 compute-1 sudo[30396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:05 compute-1 python3[30398]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1712844-34124-22115056938444/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:07:05 compute-1 sudo[30396]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:05 compute-1 sudo[30422]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxhcvzyiwpowauadppqphajdiwviywob ; /usr/bin/python3'
Jan 22 13:07:05 compute-1 sudo[30422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:05 compute-1 python3[30424]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:07:05 compute-1 sudo[30422]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:05 compute-1 sudo[30495]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trwgvthzgxwfqlfdxehqjzasrbzuowdf ; /usr/bin/python3'
Jan 22 13:07:05 compute-1 sudo[30495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:05 compute-1 python3[30497]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1712844-34124-22115056938444/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:07:05 compute-1 sudo[30495]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:05 compute-1 sudo[30521]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llzrpygztnecfwxnjrynwpawvudrpufd ; /usr/bin/python3'
Jan 22 13:07:05 compute-1 sudo[30521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:06 compute-1 python3[30523]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:07:06 compute-1 sudo[30521]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:06 compute-1 sudo[30594]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhurawwenumjdcvpxixjmkovypbhfxql ; /usr/bin/python3'
Jan 22 13:07:06 compute-1 sudo[30594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:06 compute-1 python3[30596]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1712844-34124-22115056938444/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:07:06 compute-1 sudo[30594]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:06 compute-1 sudo[30620]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvomtqmflbfjvimkbmvjgszdprqxgzkl ; /usr/bin/python3'
Jan 22 13:07:06 compute-1 sudo[30620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:06 compute-1 python3[30622]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:07:06 compute-1 sudo[30620]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:07 compute-1 sudo[30693]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zelbtuajrodydgvfaimssoxrbqubkbwb ; /usr/bin/python3'
Jan 22 13:07:07 compute-1 sudo[30693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:07 compute-1 python3[30695]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1712844-34124-22115056938444/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:07:07 compute-1 sudo[30693]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:07 compute-1 sudo[30719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-devkbdlrndyekczrlmkcknepwrvozmyi ; /usr/bin/python3'
Jan 22 13:07:07 compute-1 sudo[30719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:07 compute-1 python3[30721]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:07:07 compute-1 sudo[30719]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:07 compute-1 sudo[30792]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oopiqpnmtfoepznvgdvhqbyxvblyfnjq ; /usr/bin/python3'
Jan 22 13:07:07 compute-1 sudo[30792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:07 compute-1 python3[30794]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1712844-34124-22115056938444/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:07:07 compute-1 sudo[30792]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:19 compute-1 python3[30842]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:12:19 compute-1 sshd-session[29938]: Received disconnect from 38.102.83.41 port 43034:11: disconnected by user
Jan 22 13:12:19 compute-1 sshd-session[29938]: Disconnected from user zuul 38.102.83.41 port 43034
Jan 22 13:12:19 compute-1 sshd-session[29935]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:12:19 compute-1 systemd[1]: session-8.scope: Deactivated successfully.
Jan 22 13:12:19 compute-1 systemd[1]: session-8.scope: Consumed 5.585s CPU time.
Jan 22 13:12:19 compute-1 systemd-logind[787]: Session 8 logged out. Waiting for processes to exit.
Jan 22 13:12:19 compute-1 systemd-logind[787]: Removed session 8.
Jan 22 13:15:00 compute-1 sshd-session[30846]: Invalid user user from 45.148.10.121 port 42826
Jan 22 13:15:00 compute-1 sshd-session[30846]: Connection closed by invalid user user 45.148.10.121 port 42826 [preauth]
Jan 22 13:20:01 compute-1 anacron[8883]: Job `cron.daily' started
Jan 22 13:20:01 compute-1 anacron[8883]: Job `cron.daily' terminated
Jan 22 13:21:58 compute-1 sshd-session[30855]: Accepted publickey for zuul from 192.168.122.30 port 32830 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:21:58 compute-1 systemd-logind[787]: New session 9 of user zuul.
Jan 22 13:21:58 compute-1 systemd[1]: Started Session 9 of User zuul.
Jan 22 13:21:58 compute-1 sshd-session[30855]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:21:59 compute-1 python3.9[31008]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:22:00 compute-1 sudo[31188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evwzwewkpeysekyrujswwkqgwryzersu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088120.1717405-58-228129773825253/AnsiballZ_command.py'
Jan 22 13:22:00 compute-1 sudo[31188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:00 compute-1 python3.9[31190]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:22:01 compute-1 sshd-session[30853]: Invalid user ubuntu from 116.169.59.117 port 38294
Jan 22 13:22:01 compute-1 sshd-session[30853]: Received disconnect from 116.169.59.117 port 38294:11:  [preauth]
Jan 22 13:22:01 compute-1 sshd-session[30853]: Disconnected from invalid user ubuntu 116.169.59.117 port 38294 [preauth]
Jan 22 13:22:09 compute-1 sudo[31188]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:11 compute-1 sshd-session[30858]: Connection closed by 192.168.122.30 port 32830
Jan 22 13:22:11 compute-1 sshd-session[30855]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:22:11 compute-1 systemd[1]: session-9.scope: Deactivated successfully.
Jan 22 13:22:11 compute-1 systemd[1]: session-9.scope: Consumed 8.471s CPU time.
Jan 22 13:22:11 compute-1 systemd-logind[787]: Session 9 logged out. Waiting for processes to exit.
Jan 22 13:22:11 compute-1 systemd-logind[787]: Removed session 9.
Jan 22 13:22:26 compute-1 sshd-session[31247]: Accepted publickey for zuul from 192.168.122.30 port 48668 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:22:26 compute-1 systemd-logind[787]: New session 10 of user zuul.
Jan 22 13:22:26 compute-1 systemd[1]: Started Session 10 of User zuul.
Jan 22 13:22:26 compute-1 sshd-session[31247]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:22:27 compute-1 python3.9[31400]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 22 13:22:29 compute-1 python3.9[31574]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:22:29 compute-1 sudo[31724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmvgjsfwlpppofbcllurkiwlcpobtvsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088149.4259677-94-113264844427429/AnsiballZ_command.py'
Jan 22 13:22:29 compute-1 sudo[31724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:30 compute-1 python3.9[31726]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:22:30 compute-1 sudo[31724]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:31 compute-1 sudo[31877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsagkhuapsizyevgvywfmkbjfmcbxmwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088150.5430155-130-108295784352620/AnsiballZ_stat.py'
Jan 22 13:22:31 compute-1 sudo[31877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:31 compute-1 python3.9[31879]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:22:31 compute-1 sudo[31877]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:31 compute-1 sudo[32029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtdeavfoykbtowyoiqjopqkqqyfezxjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088151.5174515-154-129533657432450/AnsiballZ_file.py'
Jan 22 13:22:31 compute-1 sudo[32029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:32 compute-1 python3.9[32031]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:22:32 compute-1 sudo[32029]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:32 compute-1 sudo[32181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufykqtnvyzgllifzthvefuxhplpicssz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088152.4621327-178-18361954212508/AnsiballZ_stat.py'
Jan 22 13:22:32 compute-1 sudo[32181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:32 compute-1 python3.9[32183]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:22:32 compute-1 sudo[32181]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:33 compute-1 sudo[32304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsricajlttbbpjatzjvndzvnrluwyyac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088152.4621327-178-18361954212508/AnsiballZ_copy.py'
Jan 22 13:22:33 compute-1 sudo[32304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:33 compute-1 python3.9[32306]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088152.4621327-178-18361954212508/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:22:33 compute-1 sudo[32304]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:34 compute-1 sudo[32456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbntgzsmvpsvtmmzuirbrjbjoxhaxprs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088153.9308953-223-22687355972007/AnsiballZ_setup.py'
Jan 22 13:22:34 compute-1 sudo[32456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:34 compute-1 python3.9[32458]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:22:34 compute-1 sudo[32456]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:35 compute-1 sudo[32612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpvoudjhghvczidnbgzfrcauufwmsfxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088154.9667-247-51506447844059/AnsiballZ_file.py'
Jan 22 13:22:35 compute-1 sudo[32612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:35 compute-1 python3.9[32614]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:22:35 compute-1 sudo[32612]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:36 compute-1 sudo[32764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atxthhepyxpdkwrnsxtxtkcjsdycknbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088155.7841773-274-72854470449818/AnsiballZ_file.py'
Jan 22 13:22:36 compute-1 sudo[32764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:36 compute-1 python3.9[32766]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:22:36 compute-1 sudo[32764]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:37 compute-1 python3.9[32916]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:22:42 compute-1 python3.9[33169]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:22:43 compute-1 python3.9[33319]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:22:44 compute-1 python3.9[33473]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:22:45 compute-1 sudo[33630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlrcxsgwctnsktcgvdktgjpeymmipkil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088164.959941-418-225211129139696/AnsiballZ_setup.py'
Jan 22 13:22:45 compute-1 sudo[33630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:45 compute-1 python3.9[33632]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:22:45 compute-1 sudo[33630]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:46 compute-1 sudo[33714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtbzyxhvjnrnifzngsmhmlrfmpbnfbkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088164.959941-418-225211129139696/AnsiballZ_dnf.py'
Jan 22 13:22:46 compute-1 sudo[33714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:46 compute-1 python3.9[33716]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:23:35 compute-1 systemd[1]: Reloading.
Jan 22 13:23:35 compute-1 systemd-rc-local-generator[33915]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:23:35 compute-1 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 22 13:23:37 compute-1 systemd[1]: Reloading.
Jan 22 13:23:37 compute-1 systemd-rc-local-generator[33955]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:23:37 compute-1 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 22 13:23:37 compute-1 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 22 13:23:37 compute-1 systemd[1]: Reloading.
Jan 22 13:23:37 compute-1 systemd-rc-local-generator[33998]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:23:37 compute-1 systemd[1]: Starting dnf makecache...
Jan 22 13:23:37 compute-1 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 22 13:23:38 compute-1 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 22 13:23:38 compute-1 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 22 13:23:38 compute-1 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 22 13:23:38 compute-1 dnf[34006]: Failed determining last makecache time.
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-openstack-barbican-42b4c41831408a8e323 132 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 190 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-openstack-cinder-1c00d6490d88e436f26ef 160 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-python-stevedore-c4acc5639fd2329372142 175 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-python-cloudkitty-tests-tempest-2c80f8 192 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-os-refresh-config-9bfc52b5049be2d8de61 162 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 169 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-python-designate-tests-tempest-347fdbc 171 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-openstack-glance-1fd12c29b339f30fe823e 178 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 173 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-openstack-manila-3c01b7181572c95dac462 154 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-python-whitebox-neutron-tests-tempest- 152 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-openstack-octavia-ba397f07a7331190208c 175 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-openstack-watcher-c014f81a8647287f6dcc 165 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-ansible-config_template-5ccaa22121a7ff 158 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 158 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-openstack-swift-dc98a8463506ac520c469a 154 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-python-tempestconf-8515371b7cceebd4282 172 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: delorean-openstack-heat-ui-013accbfd179753bc3f0 135 kB/s | 3.0 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: CentOS Stream 9 - BaseOS                         72 kB/s | 6.7 kB     00:00
Jan 22 13:23:38 compute-1 dnf[34006]: CentOS Stream 9 - AppStream                      61 kB/s | 6.8 kB     00:00
Jan 22 13:23:39 compute-1 dnf[34006]: CentOS Stream 9 - CRB                            28 kB/s | 6.6 kB     00:00
Jan 22 13:23:39 compute-1 dnf[34006]: CentOS Stream 9 - Extras packages                33 kB/s | 7.3 kB     00:00
Jan 22 13:23:39 compute-1 dnf[34006]: dlrn-antelope-testing                           171 kB/s | 3.0 kB     00:00
Jan 22 13:23:39 compute-1 dnf[34006]: dlrn-antelope-build-deps                        185 kB/s | 3.0 kB     00:00
Jan 22 13:23:39 compute-1 dnf[34006]: centos9-rabbitmq                                137 kB/s | 3.0 kB     00:00
Jan 22 13:23:39 compute-1 dnf[34006]: centos9-storage                                 135 kB/s | 3.0 kB     00:00
Jan 22 13:23:39 compute-1 dnf[34006]: centos9-opstools                                130 kB/s | 3.0 kB     00:00
Jan 22 13:23:39 compute-1 dnf[34006]: NFV SIG OpenvSwitch                             137 kB/s | 3.0 kB     00:00
Jan 22 13:23:39 compute-1 dnf[34006]: repo-setup-centos-appstream                     161 kB/s | 4.4 kB     00:00
Jan 22 13:23:39 compute-1 dnf[34006]: repo-setup-centos-baseos                        154 kB/s | 3.9 kB     00:00
Jan 22 13:23:39 compute-1 dnf[34006]: repo-setup-centos-highavailability              166 kB/s | 3.9 kB     00:00
Jan 22 13:23:39 compute-1 dnf[34006]: repo-setup-centos-powertools                    193 kB/s | 4.3 kB     00:00
Jan 22 13:23:40 compute-1 sshd-session[34064]: Connection closed by 154.41.135.50 port 31480 [preauth]
Jan 22 13:23:40 compute-1 dnf[34006]: Extra Packages for Enterprise Linux 9 - x86_64   27 kB/s |  25 kB     00:00
Jan 22 13:23:41 compute-1 dnf[34006]: Metadata cache created.
Jan 22 13:23:41 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 22 13:23:41 compute-1 systemd[1]: Finished dnf makecache.
Jan 22 13:23:41 compute-1 systemd[1]: dnf-makecache.service: Consumed 1.985s CPU time.
Jan 22 13:24:45 compute-1 kernel: SELinux:  Converting 2725 SID table entries...
Jan 22 13:24:45 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:24:45 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 22 13:24:45 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:24:45 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:24:45 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:24:45 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:24:45 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:24:45 compute-1 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 22 13:24:45 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:24:46 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:24:46 compute-1 systemd[1]: Reloading.
Jan 22 13:24:46 compute-1 systemd-rc-local-generator[34390]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:24:46 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:24:46 compute-1 sudo[33714]: pam_unix(sudo:session): session closed for user root
Jan 22 13:24:47 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:24:47 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:24:47 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.316s CPU time.
Jan 22 13:24:47 compute-1 systemd[1]: run-r201b7a5edb474e1fb1173c958de17902.service: Deactivated successfully.
Jan 22 13:24:57 compute-1 sudo[35298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcoijpkgizkqwjhhptdzilibltllthxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088297.2829187-455-223060980179832/AnsiballZ_command.py'
Jan 22 13:24:57 compute-1 sudo[35298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:24:57 compute-1 python3.9[35300]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:24:58 compute-1 sudo[35298]: pam_unix(sudo:session): session closed for user root
Jan 22 13:24:59 compute-1 sudo[35579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxuqmljuxgpvkufttgvbckhjzyhlvajg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088298.9698646-478-267381746824103/AnsiballZ_selinux.py'
Jan 22 13:24:59 compute-1 sudo[35579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:24:59 compute-1 python3.9[35581]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 22 13:24:59 compute-1 sudo[35579]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:00 compute-1 sudo[35731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfclcsqqngnhpvlsaifixomdsdekzuon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088300.4321504-511-128799764286468/AnsiballZ_command.py'
Jan 22 13:25:00 compute-1 sudo[35731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:00 compute-1 python3.9[35733]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 22 13:25:03 compute-1 sudo[35731]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:03 compute-1 sudo[35885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftmfzuvabsrzinlgmgltrwkrcbbiikap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088303.5462606-535-273691433618850/AnsiballZ_file.py'
Jan 22 13:25:03 compute-1 sudo[35885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:05 compute-1 python3.9[35887]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:25:05 compute-1 sudo[35885]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:06 compute-1 sudo[36037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjgfbiggxqtmuunvppnjfcdejditcwrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088305.9412327-559-43832493386758/AnsiballZ_mount.py'
Jan 22 13:25:06 compute-1 sudo[36037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:08 compute-1 python3.9[36039]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 22 13:25:08 compute-1 sudo[36037]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:11 compute-1 sudo[36189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mugdkfftwzrsxsbsryfhnhszxzcyhrdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088311.1718905-643-271265279065604/AnsiballZ_file.py'
Jan 22 13:25:11 compute-1 sudo[36189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:11 compute-1 python3.9[36191]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:25:11 compute-1 sudo[36189]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:12 compute-1 sudo[36341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejlexfstghisjkungqjsxvzyyjoaurrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088311.9991283-668-128024124059447/AnsiballZ_stat.py'
Jan 22 13:25:12 compute-1 sudo[36341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:12 compute-1 python3.9[36343]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:25:12 compute-1 sudo[36341]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:13 compute-1 sudo[36464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfdzbysnuzjhgncahrbenxfbvimihykq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088311.9991283-668-128024124059447/AnsiballZ_copy.py'
Jan 22 13:25:13 compute-1 sudo[36464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:13 compute-1 python3.9[36466]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088311.9991283-668-128024124059447/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:25:13 compute-1 sudo[36464]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:14 compute-1 sudo[36616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhgityrhacmsxozdclbkoltnrfxdaskj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088314.2974787-739-143499759024977/AnsiballZ_stat.py'
Jan 22 13:25:14 compute-1 sudo[36616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:14 compute-1 python3.9[36618]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:25:14 compute-1 sudo[36616]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:15 compute-1 sudo[36768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcfgsjimkthzafhnopfekfqoeqlouzxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088315.0790412-763-190800681748568/AnsiballZ_command.py'
Jan 22 13:25:15 compute-1 sudo[36768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:15 compute-1 python3.9[36770]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:25:15 compute-1 sudo[36768]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:16 compute-1 sudo[36921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfqttdnblbayqtmvuaruwcqlmirlyxoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088315.8405757-787-631041552843/AnsiballZ_file.py'
Jan 22 13:25:16 compute-1 sudo[36921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:16 compute-1 python3.9[36923]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:25:16 compute-1 sudo[36921]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:17 compute-1 sudo[37073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lemiuydhrktvvdauidbjndalsetqrvsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088316.9846563-820-171169722435474/AnsiballZ_getent.py'
Jan 22 13:25:17 compute-1 sudo[37073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:17 compute-1 python3.9[37075]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 22 13:25:17 compute-1 sudo[37073]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:17 compute-1 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 13:25:17 compute-1 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 13:25:18 compute-1 sudo[37227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sadnqysqjklcxwckmlshpoajsyxxgpjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088318.0104613-844-276727982693407/AnsiballZ_group.py'
Jan 22 13:25:18 compute-1 sudo[37227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:18 compute-1 python3.9[37229]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 13:25:18 compute-1 groupadd[37230]: group added to /etc/group: name=qemu, GID=107
Jan 22 13:25:18 compute-1 groupadd[37230]: group added to /etc/gshadow: name=qemu
Jan 22 13:25:18 compute-1 groupadd[37230]: new group: name=qemu, GID=107
Jan 22 13:25:18 compute-1 sudo[37227]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:19 compute-1 sudo[37385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgwvztesgzqcwzaiqghnwslwhvezmgmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088319.2535572-868-229677669309715/AnsiballZ_user.py'
Jan 22 13:25:19 compute-1 sudo[37385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:20 compute-1 python3.9[37387]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 13:25:20 compute-1 useradd[37389]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 22 13:25:20 compute-1 sudo[37385]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:20 compute-1 sudo[37545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qorsmzinophfsmxeadanslbjgplybgtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088320.5328703-892-98950761211963/AnsiballZ_getent.py'
Jan 22 13:25:20 compute-1 sudo[37545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:21 compute-1 python3.9[37547]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 22 13:25:21 compute-1 sudo[37545]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:21 compute-1 sudo[37698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzeogbwjvidbubdpxhytbyermkrnnxiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088321.2814987-916-41404680547725/AnsiballZ_group.py'
Jan 22 13:25:21 compute-1 sudo[37698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:21 compute-1 python3.9[37700]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 13:25:21 compute-1 groupadd[37701]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 22 13:25:21 compute-1 groupadd[37701]: group added to /etc/gshadow: name=hugetlbfs
Jan 22 13:25:21 compute-1 groupadd[37701]: new group: name=hugetlbfs, GID=42477
Jan 22 13:25:21 compute-1 sudo[37698]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:22 compute-1 sudo[37856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfokpdyckhvubfcymzsdyghdtyywmcfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088322.1901357-943-29583211116366/AnsiballZ_file.py'
Jan 22 13:25:22 compute-1 sudo[37856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:22 compute-1 python3.9[37858]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 22 13:25:22 compute-1 sudo[37856]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:23 compute-1 sudo[38008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfajgwajiqkskhmhwurmqrrqlvsbjbqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088323.2392697-976-118971249755483/AnsiballZ_dnf.py'
Jan 22 13:25:23 compute-1 sudo[38008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:23 compute-1 python3.9[38010]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:25:28 compute-1 sudo[38008]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:29 compute-1 sudo[38163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuknhxggpjnpnrmepptdbhtgrglbfwnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088328.9955518-1001-200418119870310/AnsiballZ_file.py'
Jan 22 13:25:29 compute-1 sudo[38163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:29 compute-1 python3.9[38165]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:25:29 compute-1 sudo[38163]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:30 compute-1 sudo[38315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duwofugfejognzxsbfbrjjocppofagnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088329.704591-1024-74408571587814/AnsiballZ_stat.py'
Jan 22 13:25:30 compute-1 sudo[38315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:30 compute-1 python3.9[38317]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:25:30 compute-1 sudo[38315]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:30 compute-1 sudo[38438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bofgvlowpjdpsmyuyyhwlgspzrojlgrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088329.704591-1024-74408571587814/AnsiballZ_copy.py'
Jan 22 13:25:30 compute-1 sudo[38438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:30 compute-1 python3.9[38440]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088329.704591-1024-74408571587814/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:25:30 compute-1 sudo[38438]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:31 compute-1 sudo[38590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glrwgfmuxctgerzmktumacrlzgpmwwyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088331.0407536-1069-93602401150546/AnsiballZ_systemd.py'
Jan 22 13:25:31 compute-1 sudo[38590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:32 compute-1 python3.9[38592]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:25:32 compute-1 systemd[1]: Starting Load Kernel Modules...
Jan 22 13:25:32 compute-1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 22 13:25:32 compute-1 kernel: Bridge firewalling registered
Jan 22 13:25:32 compute-1 systemd-modules-load[38596]: Inserted module 'br_netfilter'
Jan 22 13:25:32 compute-1 systemd[1]: Finished Load Kernel Modules.
Jan 22 13:25:32 compute-1 sudo[38590]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:32 compute-1 sudo[38749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edgtcuryorqbjxayyuyyepzzbvivjfvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088332.4975896-1094-131165375167901/AnsiballZ_stat.py'
Jan 22 13:25:32 compute-1 sudo[38749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:32 compute-1 python3.9[38751]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:25:33 compute-1 sudo[38749]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:33 compute-1 sudo[38872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lypvnmhwyccocmlzmvpdwgmyhpxttyeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088332.4975896-1094-131165375167901/AnsiballZ_copy.py'
Jan 22 13:25:33 compute-1 sudo[38872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:33 compute-1 python3.9[38874]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088332.4975896-1094-131165375167901/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:25:33 compute-1 sudo[38872]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:34 compute-1 sudo[39024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgsodwuzwyhvnkwvpfsghquzfmqtguaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088334.2948818-1147-204224964680170/AnsiballZ_dnf.py'
Jan 22 13:25:34 compute-1 sudo[39024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:34 compute-1 python3.9[39026]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:25:38 compute-1 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 22 13:25:38 compute-1 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 22 13:25:39 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:25:39 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:25:39 compute-1 systemd[1]: Reloading.
Jan 22 13:25:39 compute-1 systemd-rc-local-generator[39089]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:25:39 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:25:39 compute-1 sudo[39024]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:40 compute-1 python3.9[40393]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:25:41 compute-1 python3.9[41369]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 22 13:25:42 compute-1 python3.9[42189]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:25:43 compute-1 sudo[43135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woindaaraecvtpirpuhqstuvzmfkupyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088342.9131587-1264-208185352074952/AnsiballZ_command.py'
Jan 22 13:25:43 compute-1 sudo[43135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:43 compute-1 python3.9[43154]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:25:43 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:25:43 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:25:43 compute-1 systemd[1]: man-db-cache-update.service: Consumed 5.176s CPU time.
Jan 22 13:25:43 compute-1 systemd[1]: run-rdb45f3af7e234bd88beeec9e29a23930.service: Deactivated successfully.
Jan 22 13:25:43 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 13:25:43 compute-1 systemd[1]: Starting Authorization Manager...
Jan 22 13:25:43 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 13:25:44 compute-1 polkitd[43403]: Started polkitd version 0.117
Jan 22 13:25:44 compute-1 polkitd[43403]: Loading rules from directory /etc/polkit-1/rules.d
Jan 22 13:25:44 compute-1 polkitd[43403]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 22 13:25:44 compute-1 polkitd[43403]: Finished loading, compiling and executing 2 rules
Jan 22 13:25:44 compute-1 systemd[1]: Started Authorization Manager.
Jan 22 13:25:44 compute-1 polkitd[43403]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 22 13:25:44 compute-1 sudo[43135]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:44 compute-1 sudo[43571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkljndluzxxbxrjxtzfjdfqvsmsosdub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088344.5557468-1291-168637587112921/AnsiballZ_systemd.py'
Jan 22 13:25:44 compute-1 sudo[43571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:45 compute-1 python3.9[43573]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:25:45 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 22 13:25:45 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Jan 22 13:25:45 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 22 13:25:45 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 13:25:45 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 13:25:45 compute-1 sudo[43571]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:46 compute-1 python3.9[43735]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 22 13:25:50 compute-1 sudo[43885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfjmvrjchwsgdklusxdulldooalscshh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088349.8324335-1462-62366900634315/AnsiballZ_systemd.py'
Jan 22 13:25:50 compute-1 sudo[43885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:50 compute-1 python3.9[43887]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:25:50 compute-1 systemd[1]: Reloading.
Jan 22 13:25:50 compute-1 systemd-rc-local-generator[43917]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:25:50 compute-1 sudo[43885]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:51 compute-1 sudo[44074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odiuqpwtquwssrqrwljzfbfeidgiijau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088351.0092351-1462-254159739189321/AnsiballZ_systemd.py'
Jan 22 13:25:51 compute-1 sudo[44074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:51 compute-1 python3.9[44076]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:25:51 compute-1 systemd[1]: Reloading.
Jan 22 13:25:51 compute-1 systemd-rc-local-generator[44106]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:25:52 compute-1 sudo[44074]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:52 compute-1 sudo[44263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozqplasfnzbvfgziqnkmalkukqobnykx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088352.6133657-1510-9193376791216/AnsiballZ_command.py'
Jan 22 13:25:52 compute-1 sudo[44263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:53 compute-1 python3.9[44265]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:25:53 compute-1 sudo[44263]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:53 compute-1 sudo[44416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiququmstdltuvkfmggwqdomejffmvnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088353.4079173-1535-48629528258192/AnsiballZ_command.py'
Jan 22 13:25:53 compute-1 sudo[44416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:53 compute-1 python3.9[44418]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:25:53 compute-1 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 22 13:25:53 compute-1 sudo[44416]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:54 compute-1 sudo[44569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljoexxihbeqllzhcztngxyihotgcdsbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088354.2249382-1558-142199984219057/AnsiballZ_command.py'
Jan 22 13:25:54 compute-1 sudo[44569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:54 compute-1 python3.9[44571]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:25:55 compute-1 irqbalance[785]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 22 13:25:55 compute-1 irqbalance[785]: IRQ 26 affinity is now unmanaged
Jan 22 13:25:56 compute-1 sudo[44569]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:56 compute-1 sudo[44731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqbppchgxvzhvqlhexsowcucdrokvqsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088356.5078244-1582-264879101244757/AnsiballZ_command.py'
Jan 22 13:25:56 compute-1 sudo[44731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:56 compute-1 python3.9[44733]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:25:57 compute-1 sudo[44731]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:57 compute-1 sudo[44884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llqenouhfdgbqkdsnfjamixmghigqomm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088357.3233674-1606-186841432234552/AnsiballZ_systemd.py'
Jan 22 13:25:57 compute-1 sudo[44884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:57 compute-1 python3.9[44886]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:25:57 compute-1 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 22 13:25:57 compute-1 systemd[1]: Stopped Apply Kernel Variables.
Jan 22 13:25:57 compute-1 systemd[1]: Stopping Apply Kernel Variables...
Jan 22 13:25:57 compute-1 systemd[1]: Starting Apply Kernel Variables...
Jan 22 13:25:57 compute-1 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 22 13:25:57 compute-1 systemd[1]: Finished Apply Kernel Variables.
Jan 22 13:25:58 compute-1 sudo[44884]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:58 compute-1 sshd-session[31250]: Connection closed by 192.168.122.30 port 48668
Jan 22 13:25:58 compute-1 sshd-session[31247]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:25:58 compute-1 systemd[1]: session-10.scope: Deactivated successfully.
Jan 22 13:25:58 compute-1 systemd[1]: session-10.scope: Consumed 2min 28.885s CPU time.
Jan 22 13:25:58 compute-1 systemd-logind[787]: Session 10 logged out. Waiting for processes to exit.
Jan 22 13:25:58 compute-1 systemd-logind[787]: Removed session 10.
Jan 22 13:26:03 compute-1 sshd-session[44917]: Accepted publickey for zuul from 192.168.122.30 port 54002 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:26:03 compute-1 systemd-logind[787]: New session 11 of user zuul.
Jan 22 13:26:03 compute-1 systemd[1]: Started Session 11 of User zuul.
Jan 22 13:26:03 compute-1 sshd-session[44917]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:26:04 compute-1 python3.9[45070]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:26:05 compute-1 sudo[45224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whtbuuokuxekfczhrzsdqhikrbkipfgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088365.5024037-69-250427189465478/AnsiballZ_getent.py'
Jan 22 13:26:05 compute-1 sudo[45224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:06 compute-1 python3.9[45226]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 22 13:26:06 compute-1 sudo[45224]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:06 compute-1 sudo[45377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aljdcjixbfmmdliapinxlbdpcwfdzrnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088366.4196804-93-273726288549880/AnsiballZ_group.py'
Jan 22 13:26:06 compute-1 sudo[45377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:07 compute-1 python3.9[45379]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 13:26:07 compute-1 groupadd[45380]: group added to /etc/group: name=openvswitch, GID=42476
Jan 22 13:26:07 compute-1 groupadd[45380]: group added to /etc/gshadow: name=openvswitch
Jan 22 13:26:07 compute-1 groupadd[45380]: new group: name=openvswitch, GID=42476
Jan 22 13:26:07 compute-1 sudo[45377]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:07 compute-1 sudo[45535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgjvdcuwssuqxdwmhpihhelbjxpydbux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088367.4132497-117-150513964895576/AnsiballZ_user.py'
Jan 22 13:26:07 compute-1 sudo[45535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:08 compute-1 python3.9[45537]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 13:26:08 compute-1 useradd[45539]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 22 13:26:08 compute-1 useradd[45539]: add 'openvswitch' to group 'hugetlbfs'
Jan 22 13:26:08 compute-1 useradd[45539]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 22 13:26:08 compute-1 sudo[45535]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:08 compute-1 sudo[45695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxrdwdikyhfeqqgrlojmtkrcpqcyzrkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088368.6594548-147-151459460795080/AnsiballZ_setup.py'
Jan 22 13:26:08 compute-1 sudo[45695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:09 compute-1 python3.9[45697]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:26:09 compute-1 sudo[45695]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:09 compute-1 sudo[45779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grvhrkeqwjpaeesptwbpztntjhtfwqqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088368.6594548-147-151459460795080/AnsiballZ_dnf.py'
Jan 22 13:26:09 compute-1 sudo[45779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:10 compute-1 python3.9[45781]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 13:26:13 compute-1 sudo[45779]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:14 compute-1 sudo[45942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihwzjivnrlffxogqiiwrxxvbttxqoous ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088374.0512776-189-217568094061918/AnsiballZ_dnf.py'
Jan 22 13:26:14 compute-1 sudo[45942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:14 compute-1 python3.9[45944]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:26:29 compute-1 kernel: SELinux:  Converting 2737 SID table entries...
Jan 22 13:26:29 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:26:29 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 22 13:26:29 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:26:29 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:26:29 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:26:29 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:26:29 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:26:30 compute-1 groupadd[45967]: group added to /etc/group: name=unbound, GID=994
Jan 22 13:26:30 compute-1 groupadd[45967]: group added to /etc/gshadow: name=unbound
Jan 22 13:26:30 compute-1 groupadd[45967]: new group: name=unbound, GID=994
Jan 22 13:26:30 compute-1 useradd[45974]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 22 13:26:30 compute-1 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 22 13:26:30 compute-1 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 22 13:26:31 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:26:31 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:26:31 compute-1 systemd[1]: Reloading.
Jan 22 13:26:31 compute-1 systemd-rc-local-generator[46472]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:26:31 compute-1 systemd-sysv-generator[46475]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:26:31 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:26:32 compute-1 sudo[45942]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:32 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:26:32 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:26:32 compute-1 systemd[1]: run-r10f21485168d40538f29106072465696.service: Deactivated successfully.
Jan 22 13:26:33 compute-1 sudo[47041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jovsqrpnlextmxjoajizflqehrjhwozk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088392.6871874-213-154777635576372/AnsiballZ_systemd.py'
Jan 22 13:26:33 compute-1 sudo[47041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:33 compute-1 python3.9[47043]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:26:33 compute-1 systemd[1]: Reloading.
Jan 22 13:26:33 compute-1 systemd-rc-local-generator[47072]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:26:33 compute-1 systemd-sysv-generator[47076]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:26:33 compute-1 systemd[1]: Starting Open vSwitch Database Unit...
Jan 22 13:26:33 compute-1 chown[47086]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 22 13:26:34 compute-1 ovs-ctl[47091]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 22 13:26:34 compute-1 ovs-ctl[47091]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 22 13:26:34 compute-1 ovs-ctl[47091]: Starting ovsdb-server [  OK  ]
Jan 22 13:26:34 compute-1 ovs-vsctl[47140]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 22 13:26:34 compute-1 ovs-vsctl[47160]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c803af81-5cf0-46ac-8f46-401e876a838c\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 22 13:26:34 compute-1 ovs-ctl[47091]: Configuring Open vSwitch system IDs [  OK  ]
Jan 22 13:26:34 compute-1 ovs-vsctl[47166]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Jan 22 13:26:34 compute-1 ovs-ctl[47091]: Enabling remote OVSDB managers [  OK  ]
Jan 22 13:26:34 compute-1 systemd[1]: Started Open vSwitch Database Unit.
Jan 22 13:26:34 compute-1 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 22 13:26:34 compute-1 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 22 13:26:34 compute-1 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 22 13:26:34 compute-1 kernel: openvswitch: Open vSwitch switching datapath
Jan 22 13:26:34 compute-1 ovs-ctl[47210]: Inserting openvswitch module [  OK  ]
Jan 22 13:26:34 compute-1 ovs-ctl[47179]: Starting ovs-vswitchd [  OK  ]
Jan 22 13:26:34 compute-1 ovs-vsctl[47230]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Jan 22 13:26:34 compute-1 ovs-ctl[47179]: Enabling remote OVSDB managers [  OK  ]
Jan 22 13:26:34 compute-1 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 22 13:26:34 compute-1 systemd[1]: Starting Open vSwitch...
Jan 22 13:26:34 compute-1 systemd[1]: Finished Open vSwitch.
Jan 22 13:26:34 compute-1 sudo[47041]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:36 compute-1 python3.9[47382]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:26:37 compute-1 sudo[47532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urfqmllmqetejomywngzzflxetbjzecl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088397.0929224-267-233731253336883/AnsiballZ_sefcontext.py'
Jan 22 13:26:37 compute-1 sudo[47532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:37 compute-1 python3.9[47534]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 22 13:26:39 compute-1 kernel: SELinux:  Converting 2751 SID table entries...
Jan 22 13:26:39 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:26:39 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 22 13:26:39 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:26:39 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:26:39 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:26:39 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:26:39 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:26:39 compute-1 sudo[47532]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:40 compute-1 python3.9[47689]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:26:41 compute-1 sudo[47845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvdbowueuwdeqnnydvshqdblyjilrhxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088401.1720924-321-217920277301972/AnsiballZ_dnf.py'
Jan 22 13:26:41 compute-1 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 22 13:26:41 compute-1 sudo[47845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:41 compute-1 python3.9[47847]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:26:43 compute-1 sudo[47845]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:43 compute-1 sudo[47998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agejmdernvlkylejchgstqrzkivrhscq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088403.574886-345-37006953712234/AnsiballZ_command.py'
Jan 22 13:26:43 compute-1 sudo[47998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:44 compute-1 python3.9[48000]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:26:45 compute-1 sudo[47998]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:45 compute-1 sudo[48285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqmbonkjpnsfggdtlohbrdzqmcdssgrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088405.2779343-369-181101794121299/AnsiballZ_file.py'
Jan 22 13:26:45 compute-1 sudo[48285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:45 compute-1 python3.9[48287]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 22 13:26:45 compute-1 sudo[48285]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:46 compute-1 python3.9[48437]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:26:47 compute-1 sudo[48589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzehmiwpsjjabqhryhklopfakebdfcnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088407.181228-417-221311801165336/AnsiballZ_dnf.py'
Jan 22 13:26:47 compute-1 sudo[48589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:47 compute-1 python3.9[48591]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:26:50 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:26:50 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:26:50 compute-1 systemd[1]: Reloading.
Jan 22 13:26:50 compute-1 systemd-sysv-generator[48637]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:26:50 compute-1 systemd-rc-local-generator[48634]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:26:50 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:26:51 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:26:51 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:26:51 compute-1 systemd[1]: run-r474fc28a3a63429a994610028b3c1011.service: Deactivated successfully.
Jan 22 13:26:51 compute-1 sudo[48589]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:52 compute-1 sudo[48906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsfkmifdsyxnalmijgkbeolubgejzsoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088411.9718451-441-269366382456836/AnsiballZ_systemd.py'
Jan 22 13:26:52 compute-1 sudo[48906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:52 compute-1 python3.9[48908]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:26:52 compute-1 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 22 13:26:52 compute-1 systemd[1]: Stopped Network Manager Wait Online.
Jan 22 13:26:52 compute-1 systemd[1]: Stopping Network Manager Wait Online...
Jan 22 13:26:52 compute-1 systemd[1]: Stopping Network Manager...
Jan 22 13:26:52 compute-1 NetworkManager[7197]: <info>  [1769088412.5976] caught SIGTERM, shutting down normally.
Jan 22 13:26:52 compute-1 NetworkManager[7197]: <info>  [1769088412.6003] dhcp4 (eth0): canceled DHCP transaction
Jan 22 13:26:52 compute-1 NetworkManager[7197]: <info>  [1769088412.6004] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 13:26:52 compute-1 NetworkManager[7197]: <info>  [1769088412.6004] dhcp4 (eth0): state changed no lease
Jan 22 13:26:52 compute-1 NetworkManager[7197]: <info>  [1769088412.6007] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 13:26:52 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 13:26:52 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 13:26:53 compute-1 NetworkManager[7197]: <info>  [1769088413.2344] exiting (success)
Jan 22 13:26:53 compute-1 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 22 13:26:53 compute-1 systemd[1]: Stopped Network Manager.
Jan 22 13:26:53 compute-1 systemd[1]: NetworkManager.service: Consumed 14.762s CPU time, 4.1M memory peak, read 0B from disk, written 20.0K to disk.
Jan 22 13:26:53 compute-1 systemd[1]: Starting Network Manager...
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3019] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:d923d6f4-79ae-48f6-b1f3-cf5ec2bceff3)
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3023] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3094] manager[0x557f129f7000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 13:26:53 compute-1 systemd[1]: Starting Hostname Service...
Jan 22 13:26:53 compute-1 systemd[1]: Started Hostname Service.
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3916] hostname: hostname: using hostnamed
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3917] hostname: static hostname changed from (none) to "compute-1"
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3922] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3928] manager[0x557f129f7000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3928] manager[0x557f129f7000]: rfkill: WWAN hardware radio set enabled
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3949] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3958] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3959] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3959] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3960] manager: Networking is enabled by state file
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3962] settings: Loaded settings plugin: keyfile (internal)
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3967] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3989] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.3998] dhcp: init: Using DHCP client 'internal'
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4000] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4006] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4010] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4018] device (lo): Activation: starting connection 'lo' (85925d65-d6c4-4300-b142-abef792fcfc1)
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4025] device (eth0): carrier: link connected
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4031] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4035] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4036] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4040] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4046] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4053] device (eth1): carrier: link connected
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4057] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4062] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (ca5780bd-10f2-5d02-a1d0-e241b484666f) (indicated)
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4062] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4066] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4072] device (eth1): Activation: starting connection 'ci-private-network' (ca5780bd-10f2-5d02-a1d0-e241b484666f)
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4080] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 13:26:53 compute-1 systemd[1]: Started Network Manager.
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4088] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4090] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4093] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4108] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4121] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4125] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4127] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4131] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4138] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4143] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4151] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4163] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4171] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4173] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4177] device (lo): Activation: successful, device activated.
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4188] dhcp4 (eth0): state changed new lease, address=38.102.83.119
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4193] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4271] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4275] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4277] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4279] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4281] device (eth1): Activation: successful, device activated.
Jan 22 13:26:53 compute-1 systemd[1]: Starting Network Manager Wait Online...
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4328] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4331] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4336] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4338] device (eth0): Activation: successful, device activated.
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4345] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 13:26:53 compute-1 NetworkManager[48926]: <info>  [1769088413.4348] manager: startup complete
Jan 22 13:26:53 compute-1 sudo[48906]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:53 compute-1 systemd[1]: Finished Network Manager Wait Online.
Jan 22 13:26:54 compute-1 sudo[49132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utbkzhwmuclkwtssmcyzbgsrnjwegmxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088413.7954333-465-185750214106657/AnsiballZ_dnf.py'
Jan 22 13:26:54 compute-1 sudo[49132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:54 compute-1 python3.9[49134]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:27:03 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 13:27:06 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:27:06 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:27:06 compute-1 systemd[1]: Reloading.
Jan 22 13:27:06 compute-1 systemd-rc-local-generator[49184]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:27:06 compute-1 systemd-sysv-generator[49187]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:27:06 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:27:07 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:27:07 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:27:07 compute-1 systemd[1]: run-r4e5152777b5c49e69c3010147c72545b.service: Deactivated successfully.
Jan 22 13:27:07 compute-1 sudo[49132]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:08 compute-1 sudo[49592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahfaylbdsotukqfhplvxyxbysjipznwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088428.3399875-501-267353653134963/AnsiballZ_stat.py'
Jan 22 13:27:08 compute-1 sudo[49592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:08 compute-1 python3.9[49594]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:27:08 compute-1 sudo[49592]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:09 compute-1 sudo[49744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmzfdissxqghbjdwbpkojltwdbkhdcfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088429.125785-528-12300004502528/AnsiballZ_ini_file.py'
Jan 22 13:27:09 compute-1 sudo[49744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:09 compute-1 python3.9[49746]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:09 compute-1 sudo[49744]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:10 compute-1 sudo[49898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igthayehqtxmccgulfzorrcinldbxfse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088430.1951432-558-16725281640744/AnsiballZ_ini_file.py'
Jan 22 13:27:10 compute-1 sudo[49898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:10 compute-1 python3.9[49900]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:10 compute-1 sudo[49898]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:11 compute-1 sudo[50050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uquqpccjbgtuvujqcxzgjjjgnyjnygvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088431.019616-558-229775379457984/AnsiballZ_ini_file.py'
Jan 22 13:27:11 compute-1 sudo[50050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:11 compute-1 python3.9[50052]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:11 compute-1 sudo[50050]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:12 compute-1 sudo[50202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdxziuiesemdpxcvwekxjeomwttreddg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088431.7629526-603-84251921595217/AnsiballZ_ini_file.py'
Jan 22 13:27:12 compute-1 sudo[50202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:12 compute-1 python3.9[50204]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:12 compute-1 sudo[50202]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:12 compute-1 sudo[50354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgzvothsslyzrapaswrfbodrqznvhboz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088432.4456067-603-184453486631143/AnsiballZ_ini_file.py'
Jan 22 13:27:12 compute-1 sudo[50354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:12 compute-1 python3.9[50356]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:12 compute-1 sudo[50354]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:13 compute-1 sudo[50506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzvoyybyihdsazvjjonblyoxewinsgfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088433.2044976-648-177946868683446/AnsiballZ_stat.py'
Jan 22 13:27:13 compute-1 sudo[50506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:13 compute-1 python3.9[50508]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:27:13 compute-1 sudo[50506]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:14 compute-1 sudo[50629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkgmrvxktetrhdvkpmleiukmepgmpyvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088433.2044976-648-177946868683446/AnsiballZ_copy.py'
Jan 22 13:27:14 compute-1 sudo[50629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:14 compute-1 python3.9[50631]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088433.2044976-648-177946868683446/.source _original_basename=.zjt8zoce follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:14 compute-1 sudo[50629]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:15 compute-1 sudo[50781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rexxcbvwwnfbiohrhtuxqxxdmmhwbiou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088434.7331433-693-20654757725963/AnsiballZ_file.py'
Jan 22 13:27:15 compute-1 sudo[50781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:15 compute-1 python3.9[50783]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:15 compute-1 sudo[50781]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:16 compute-1 sudo[50933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dngxvczpsljtnqldhgkasrryapacuhab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088435.8181489-717-234845721634846/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 22 13:27:16 compute-1 sudo[50933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:16 compute-1 python3.9[50935]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 22 13:27:16 compute-1 sudo[50933]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:17 compute-1 sudo[51085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcxrucxbawqoalhyyhgqfaoowxyjxbse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088436.8060176-744-183463239919868/AnsiballZ_file.py'
Jan 22 13:27:17 compute-1 sudo[51085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:17 compute-1 python3.9[51087]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:17 compute-1 sudo[51085]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:18 compute-1 sudo[51237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipqytpacrjcslilgeejoihdwcpnazbbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088437.7449484-774-188765188018177/AnsiballZ_stat.py'
Jan 22 13:27:18 compute-1 sudo[51237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:18 compute-1 sudo[51237]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:19 compute-1 sudo[51360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chxlwyjiqhdqgfsnqrdvaiwtczsrqhqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088437.7449484-774-188765188018177/AnsiballZ_copy.py'
Jan 22 13:27:19 compute-1 sudo[51360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:19 compute-1 sudo[51360]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:20 compute-1 sudo[51512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfvaukbaqpkxeugeorulcxalmpwsixya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088439.4967752-819-76933654798819/AnsiballZ_slurp.py'
Jan 22 13:27:20 compute-1 sudo[51512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:20 compute-1 python3.9[51514]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 22 13:27:20 compute-1 sudo[51512]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:21 compute-1 sudo[51687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwakjswlmzheybjsbztbaalyaxwpmtzk ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088440.5615606-846-197457388192672/async_wrapper.py j277768451889 300 /home/zuul/.ansible/tmp/ansible-tmp-1769088440.5615606-846-197457388192672/AnsiballZ_edpm_os_net_config.py _'
Jan 22 13:27:21 compute-1 sudo[51687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:21 compute-1 ansible-async_wrapper.py[51689]: Invoked with j277768451889 300 /home/zuul/.ansible/tmp/ansible-tmp-1769088440.5615606-846-197457388192672/AnsiballZ_edpm_os_net_config.py _
Jan 22 13:27:21 compute-1 ansible-async_wrapper.py[51692]: Starting module and watcher
Jan 22 13:27:21 compute-1 ansible-async_wrapper.py[51692]: Start watching 51693 (300)
Jan 22 13:27:21 compute-1 ansible-async_wrapper.py[51693]: Start module (51693)
Jan 22 13:27:21 compute-1 ansible-async_wrapper.py[51689]: Return async_wrapper task started.
Jan 22 13:27:21 compute-1 sudo[51687]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:22 compute-1 python3.9[51694]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 22 13:27:22 compute-1 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 22 13:27:22 compute-1 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 22 13:27:22 compute-1 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 22 13:27:22 compute-1 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 22 13:27:22 compute-1 kernel: cfg80211: failed to load regulatory.db
Jan 22 13:27:23 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9098] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9112] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9575] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9578] audit: op="connection-add" uuid="f3a1b4c8-6898-43c8-a145-cff6493db8d5" name="br-ex-br" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9602] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9604] audit: op="connection-add" uuid="89645a98-362a-4a90-ad96-b42765a6e74e" name="br-ex-port" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9619] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9621] audit: op="connection-add" uuid="9803fa93-e62a-4987-9a66-8739bb27254a" name="eth1-port" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9632] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9633] audit: op="connection-add" uuid="784088bf-7c7d-46bf-a830-24509eb2750b" name="vlan20-port" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9643] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9645] audit: op="connection-add" uuid="56cde7b4-6262-426f-956e-5f1c36f70304" name="vlan21-port" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9655] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9656] audit: op="connection-add" uuid="ca2d6919-0e35-4f6c-b239-db4784ee9143" name="vlan22-port" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9665] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9667] audit: op="connection-add" uuid="1521fdf4-2e0c-41c9-bc78-9fc63a3e68f3" name="vlan23-port" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9690] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,connection.timestamp,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9705] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9707] audit: op="connection-add" uuid="ed8ab3e7-d1ec-48db-ab69-d8b86554973c" name="br-ex-if" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9758] audit: op="connection-update" uuid="ca5780bd-10f2-5d02-a1d0-e241b484666f" name="ci-private-network" args="ipv4.addresses,ipv4.dns,ipv4.never-default,ipv4.routes,ipv4.method,ipv4.routing-rules,connection.master,connection.port-type,connection.controller,connection.timestamp,connection.slave-type,ipv6.addresses,ipv6.dns,ipv6.routes,ipv6.method,ipv6.addr-gen-mode,ipv6.routing-rules,ovs-interface.type,ovs-external-ids.data" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9774] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9776] audit: op="connection-add" uuid="3899566f-b038-40e1-8d3a-797a9203ea2d" name="vlan20-if" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9790] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9791] audit: op="connection-add" uuid="28614ea6-7886-4bf9-9d35-18dd738908ba" name="vlan21-if" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9807] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9809] audit: op="connection-add" uuid="550bda04-7f08-4470-b6be-0d94b7fdd799" name="vlan22-if" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9825] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9826] audit: op="connection-add" uuid="45593d9f-ff07-464b-939b-e5a9bc1f4ea5" name="vlan23-if" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9839] audit: op="connection-delete" uuid="22966868-29c6-340d-be5e-bba5c29bb571" name="Wired connection 1" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9853] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <warn>  [1769088443.9858] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9866] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9869] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f3a1b4c8-6898-43c8-a145-cff6493db8d5)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9869] audit: op="connection-activate" uuid="f3a1b4c8-6898-43c8-a145-cff6493db8d5" name="br-ex-br" pid=51695 uid=0 result="success"
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9871] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <warn>  [1769088443.9872] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9876] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9880] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (89645a98-362a-4a90-ad96-b42765a6e74e)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9882] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <warn>  [1769088443.9883] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9887] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9891] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (9803fa93-e62a-4987-9a66-8739bb27254a)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9893] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <warn>  [1769088443.9894] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9899] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9903] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (784088bf-7c7d-46bf-a830-24509eb2750b)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9904] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <warn>  [1769088443.9905] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9910] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9914] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (56cde7b4-6262-426f-956e-5f1c36f70304)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9916] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <warn>  [1769088443.9917] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9921] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9925] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (ca2d6919-0e35-4f6c-b239-db4784ee9143)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9927] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <warn>  [1769088443.9928] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9933] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9937] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (1521fdf4-2e0c-41c9-bc78-9fc63a3e68f3)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9939] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9941] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9943] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9950] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <warn>  [1769088443.9951] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9953] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9958] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (ed8ab3e7-d1ec-48db-ab69-d8b86554973c)
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9958] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9962] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9963] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9965] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9966] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9987] device (eth1): disconnecting for new activation request.
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9988] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9991] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9993] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9995] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 22 13:27:23 compute-1 NetworkManager[48926]: <info>  [1769088443.9998] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <warn>  [1769088444.0000] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0003] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0009] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (3899566f-b038-40e1-8d3a-797a9203ea2d)
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0010] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0013] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0016] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0017] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0021] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <warn>  [1769088444.0022] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0026] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0032] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (28614ea6-7886-4bf9-9d35-18dd738908ba)
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0033] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0036] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0038] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0040] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0044] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <warn>  [1769088444.0045] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0049] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0054] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (550bda04-7f08-4470-b6be-0d94b7fdd799)
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0055] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0059] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0061] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0063] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0067] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <warn>  [1769088444.0068] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0072] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0078] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (45593d9f-ff07-464b-939b-e5a9bc1f4ea5)
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0079] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0082] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0084] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0086] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0089] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0105] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu" pid=51695 uid=0 result="success"
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0108] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0113] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0115] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0124] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0130] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0135] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0140] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0142] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0147] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0151] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 kernel: ovs-system: entered promiscuous mode
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0168] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0170] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0175] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0181] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 kernel: Timeout policy base is empty
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0185] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0188] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0194] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0199] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0203] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 systemd-udevd[51701]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0205] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0211] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0217] dhcp4 (eth0): canceled DHCP transaction
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0218] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0218] dhcp4 (eth0): state changed no lease
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0221] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 22 13:27:24 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0235] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0239] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51695 uid=0 result="fail" reason="Device is not activated"
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0283] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0292] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0295] dhcp4 (eth0): state changed new lease, address=38.102.83.119
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0299] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0304] device (eth1): disconnecting for new activation request.
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0305] audit: op="connection-activate" uuid="ca5780bd-10f2-5d02-a1d0-e241b484666f" name="ci-private-network" pid=51695 uid=0 result="success"
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0353] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 22 13:27:24 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0387] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51695 uid=0 result="success"
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0388] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0578] device (eth1): Activation: starting connection 'ci-private-network' (ca5780bd-10f2-5d02-a1d0-e241b484666f)
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0586] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0596] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0601] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0611] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0615] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0620] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0622] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0624] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0626] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0627] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0629] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0639] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0646] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0649] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0652] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0654] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0660] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0684] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0690] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0696] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0701] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0704] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0709] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0713] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0721] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0729] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0775] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0778] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 kernel: br-ex: entered promiscuous mode
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0798] device (eth1): Activation: successful, device activated.
Jan 22 13:27:24 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 22 13:27:24 compute-1 kernel: vlan22: entered promiscuous mode
Jan 22 13:27:24 compute-1 systemd-udevd[51699]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 13:27:24 compute-1 kernel: vlan20: entered promiscuous mode
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0967] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.0984] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 systemd-udevd[51700]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1013] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1016] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 kernel: vlan23: entered promiscuous mode
Jan 22 13:27:24 compute-1 systemd-udevd[51807]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1032] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 13:27:24 compute-1 kernel: vlan21: entered promiscuous mode
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1121] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1138] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1150] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1163] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1194] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1201] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1202] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1208] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1214] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1220] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1225] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1237] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1290] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1291] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1294] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1300] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1318] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1353] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1354] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-1 NetworkManager[48926]: <info>  [1769088444.1359] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 13:27:25 compute-1 NetworkManager[48926]: <info>  [1769088445.2683] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51695 uid=0 result="success"
Jan 22 13:27:25 compute-1 sudo[52054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vejjxhdlwpsxicgtdhdkcucsyoqkovpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088445.0314174-846-207543733066250/AnsiballZ_async_status.py'
Jan 22 13:27:25 compute-1 sudo[52054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:25 compute-1 NetworkManager[48926]: <info>  [1769088445.5249] checkpoint[0x557f129cd950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 22 13:27:25 compute-1 NetworkManager[48926]: <info>  [1769088445.5253] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51695 uid=0 result="success"
Jan 22 13:27:25 compute-1 python3.9[52056]: ansible-ansible.legacy.async_status Invoked with jid=j277768451889.51689 mode=status _async_dir=/root/.ansible_async
Jan 22 13:27:25 compute-1 sudo[52054]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:25 compute-1 NetworkManager[48926]: <info>  [1769088445.8283] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51695 uid=0 result="success"
Jan 22 13:27:25 compute-1 NetworkManager[48926]: <info>  [1769088445.8296] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51695 uid=0 result="success"
Jan 22 13:27:26 compute-1 NetworkManager[48926]: <info>  [1769088446.4681] audit: op="networking-control" arg="global-dns-configuration" pid=51695 uid=0 result="success"
Jan 22 13:27:26 compute-1 NetworkManager[48926]: <info>  [1769088446.4818] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 22 13:27:26 compute-1 NetworkManager[48926]: <info>  [1769088446.5156] audit: op="networking-control" arg="global-dns-configuration" pid=51695 uid=0 result="success"
Jan 22 13:27:26 compute-1 NetworkManager[48926]: <info>  [1769088446.5990] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51695 uid=0 result="success"
Jan 22 13:27:26 compute-1 NetworkManager[48926]: <info>  [1769088446.7721] checkpoint[0x557f129cda20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 22 13:27:26 compute-1 NetworkManager[48926]: <info>  [1769088446.7727] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51695 uid=0 result="success"
Jan 22 13:27:26 compute-1 ansible-async_wrapper.py[51692]: 51693 still running (300)
Jan 22 13:27:26 compute-1 ansible-async_wrapper.py[51693]: Module complete (51693)
Jan 22 13:27:29 compute-1 sudo[52160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liphdohniqcjrrqawzbxvynwpycbjskf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088445.0314174-846-207543733066250/AnsiballZ_async_status.py'
Jan 22 13:27:29 compute-1 sudo[52160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:29 compute-1 python3.9[52162]: ansible-ansible.legacy.async_status Invoked with jid=j277768451889.51689 mode=status _async_dir=/root/.ansible_async
Jan 22 13:27:29 compute-1 sudo[52160]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:29 compute-1 sudo[52260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etjgmarhoufrcppyspkxgcffyojppzos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088445.0314174-846-207543733066250/AnsiballZ_async_status.py'
Jan 22 13:27:29 compute-1 sudo[52260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:29 compute-1 python3.9[52262]: ansible-ansible.legacy.async_status Invoked with jid=j277768451889.51689 mode=cleanup _async_dir=/root/.ansible_async
Jan 22 13:27:29 compute-1 sudo[52260]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:30 compute-1 sudo[52412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjfqyugouxkdntafdkbyyyqosfeansso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088450.159991-927-57547827339727/AnsiballZ_stat.py'
Jan 22 13:27:30 compute-1 sudo[52412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:30 compute-1 python3.9[52414]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:27:30 compute-1 sudo[52412]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:31 compute-1 sudo[52535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhkecvsnygenostihykobrihbufjmylg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088450.159991-927-57547827339727/AnsiballZ_copy.py'
Jan 22 13:27:31 compute-1 sudo[52535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:31 compute-1 python3.9[52537]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088450.159991-927-57547827339727/.source.returncode _original_basename=.95myxv3s follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:31 compute-1 sudo[52535]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:31 compute-1 ansible-async_wrapper.py[51692]: Done in kid B.
Jan 22 13:27:32 compute-1 sudo[52687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcsxxmrzjhixohwphtatkmgbbenrfopq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088451.8722432-975-49227606299265/AnsiballZ_stat.py'
Jan 22 13:27:32 compute-1 sudo[52687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:32 compute-1 python3.9[52689]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:27:32 compute-1 sudo[52687]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:32 compute-1 sudo[52811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoxsjrgelijbosvaagzwnhcaqkigolwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088451.8722432-975-49227606299265/AnsiballZ_copy.py'
Jan 22 13:27:32 compute-1 sudo[52811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:32 compute-1 python3.9[52813]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088451.8722432-975-49227606299265/.source.cfg _original_basename=.6qp1vol0 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:32 compute-1 sudo[52811]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:33 compute-1 sudo[52963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afenpxszfgbjaawvncwguvfvbkmtdjtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088453.3352-1020-48021764803764/AnsiballZ_systemd.py'
Jan 22 13:27:33 compute-1 sudo[52963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:33 compute-1 python3.9[52965]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:27:33 compute-1 systemd[1]: Reloading Network Manager...
Jan 22 13:27:34 compute-1 NetworkManager[48926]: <info>  [1769088454.0212] audit: op="reload" arg="0" pid=52969 uid=0 result="success"
Jan 22 13:27:34 compute-1 NetworkManager[48926]: <info>  [1769088454.0223] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 22 13:27:34 compute-1 systemd[1]: Reloaded Network Manager.
Jan 22 13:27:34 compute-1 sudo[52963]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:35 compute-1 sshd-session[44920]: Connection closed by 192.168.122.30 port 54002
Jan 22 13:27:35 compute-1 sshd-session[44917]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:27:35 compute-1 systemd[1]: session-11.scope: Deactivated successfully.
Jan 22 13:27:35 compute-1 systemd[1]: session-11.scope: Consumed 59.059s CPU time.
Jan 22 13:27:35 compute-1 systemd-logind[787]: Session 11 logged out. Waiting for processes to exit.
Jan 22 13:27:35 compute-1 systemd-logind[787]: Removed session 11.
Jan 22 13:27:40 compute-1 sshd-session[53000]: Accepted publickey for zuul from 192.168.122.30 port 37296 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:27:40 compute-1 systemd-logind[787]: New session 12 of user zuul.
Jan 22 13:27:40 compute-1 systemd[1]: Started Session 12 of User zuul.
Jan 22 13:27:40 compute-1 sshd-session[53000]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:27:41 compute-1 python3.9[53153]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:27:42 compute-1 python3.9[53308]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:27:44 compute-1 python3.9[53501]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:27:44 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 13:27:44 compute-1 sshd-session[53003]: Connection closed by 192.168.122.30 port 37296
Jan 22 13:27:44 compute-1 sshd-session[53000]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:27:44 compute-1 systemd[1]: session-12.scope: Deactivated successfully.
Jan 22 13:27:44 compute-1 systemd[1]: session-12.scope: Consumed 2.459s CPU time.
Jan 22 13:27:44 compute-1 systemd-logind[787]: Session 12 logged out. Waiting for processes to exit.
Jan 22 13:27:44 compute-1 systemd-logind[787]: Removed session 12.
Jan 22 13:27:50 compute-1 sshd-session[53530]: Accepted publickey for zuul from 192.168.122.30 port 50606 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:27:50 compute-1 systemd-logind[787]: New session 13 of user zuul.
Jan 22 13:27:50 compute-1 systemd[1]: Started Session 13 of User zuul.
Jan 22 13:27:50 compute-1 sshd-session[53530]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:27:51 compute-1 python3.9[53683]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:27:52 compute-1 python3.9[53838]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:27:53 compute-1 sudo[53992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyheqjzbajlycacmbqlhjfmawgrzxhyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088472.8306875-80-127560080493786/AnsiballZ_setup.py'
Jan 22 13:27:53 compute-1 sudo[53992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:53 compute-1 python3.9[53994]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:27:53 compute-1 sudo[53992]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:54 compute-1 sudo[54076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzlrijvkoelmgopjpcidwoqftlncrrvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088472.8306875-80-127560080493786/AnsiballZ_dnf.py'
Jan 22 13:27:54 compute-1 sudo[54076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:54 compute-1 python3.9[54078]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:27:56 compute-1 sudo[54076]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:56 compute-1 sudo[54230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idooskiqdkfyfdaacvpoetmequwxwyam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088476.584673-116-269748547533725/AnsiballZ_setup.py'
Jan 22 13:27:56 compute-1 sudo[54230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:57 compute-1 python3.9[54232]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:27:57 compute-1 sudo[54230]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:59 compute-1 sudo[54425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hizughfrwkgsjzitoiepvppkollonjrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088478.7268782-149-43203596132187/AnsiballZ_file.py'
Jan 22 13:27:59 compute-1 sudo[54425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:59 compute-1 python3.9[54427]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:59 compute-1 sudo[54425]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:59 compute-1 sudo[54577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucutymqhsnntmbzxqxcjvxlcegblzmnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088479.4430318-173-224843600952389/AnsiballZ_command.py'
Jan 22 13:27:59 compute-1 sudo[54577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:00 compute-1 python3.9[54579]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:28:00 compute-1 podman[54580]: 2026-01-22 13:28:00.221904366 +0000 UTC m=+0.069431488 system refresh
Jan 22 13:28:00 compute-1 sudo[54577]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:01 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:28:01 compute-1 sudo[54739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wogbssfbnpljjxsrmrqnlxpncnybglos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088480.9074833-197-277852061083688/AnsiballZ_stat.py'
Jan 22 13:28:01 compute-1 sudo[54739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:01 compute-1 python3.9[54741]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:01 compute-1 sudo[54739]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:02 compute-1 sudo[54862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfpyjcsssgoxfsemxxbzslfsymjypaxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088480.9074833-197-277852061083688/AnsiballZ_copy.py'
Jan 22 13:28:02 compute-1 sudo[54862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:02 compute-1 python3.9[54864]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088480.9074833-197-277852061083688/.source.json follow=False _original_basename=podman_network_config.j2 checksum=f4ccbdd6e115f5848572a062f4ef89a06a1003e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:02 compute-1 sudo[54862]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:02 compute-1 sudo[55014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upapwxwinujavomjhyevccdtmvcqkipn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088482.6561015-242-126169276907312/AnsiballZ_stat.py'
Jan 22 13:28:02 compute-1 sudo[55014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:03 compute-1 python3.9[55016]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:03 compute-1 sudo[55014]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:03 compute-1 sudo[55137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-offxsfdweatqeidhpvpfetjxujkeicnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088482.6561015-242-126169276907312/AnsiballZ_copy.py'
Jan 22 13:28:03 compute-1 sudo[55137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:03 compute-1 python3.9[55139]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088482.6561015-242-126169276907312/.source.conf follow=False _original_basename=registries.conf.j2 checksum=5a3e69bacb50e2daad69ea0ffc6501536059b061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:28:03 compute-1 sudo[55137]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:04 compute-1 sudo[55289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxoxfjecwukaduaylpnxwnoxjhtkqluq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088484.075104-290-82803163223062/AnsiballZ_ini_file.py'
Jan 22 13:28:04 compute-1 sudo[55289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:04 compute-1 python3.9[55291]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:28:04 compute-1 sudo[55289]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:05 compute-1 sudo[55441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcsjnxrguzvoyohsfmgqodhnxpuyaahi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088484.8765998-290-197573518280015/AnsiballZ_ini_file.py'
Jan 22 13:28:05 compute-1 sudo[55441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:05 compute-1 python3.9[55443]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:28:05 compute-1 sudo[55441]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:05 compute-1 sudo[55593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtmlnnamxtxosmocpksegdrcshrykrnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088485.6987715-290-68320322211470/AnsiballZ_ini_file.py'
Jan 22 13:28:05 compute-1 sudo[55593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:06 compute-1 python3.9[55595]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:28:06 compute-1 sudo[55593]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:06 compute-1 sudo[55745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnfrrlatbdbtxgwvljjvqswkebbrhfor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088486.355479-290-210591095529423/AnsiballZ_ini_file.py'
Jan 22 13:28:06 compute-1 sudo[55745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:06 compute-1 python3.9[55747]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:28:06 compute-1 sudo[55745]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:07 compute-1 sudo[55897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhfrsciwqwchxqsslwudiefgwlnahtmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088487.2446275-383-6375726158007/AnsiballZ_dnf.py'
Jan 22 13:28:07 compute-1 sudo[55897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:07 compute-1 python3.9[55899]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:28:09 compute-1 sudo[55897]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:10 compute-1 sudo[56050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdetybjsaxqvrvgqswlljecfpunwkzar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088489.8327353-416-130615172011900/AnsiballZ_setup.py'
Jan 22 13:28:10 compute-1 sudo[56050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:10 compute-1 python3.9[56052]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:28:10 compute-1 sudo[56050]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:10 compute-1 sudo[56204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwhcpsfpfnjikzxmsxqcgvpqwjasnpku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088490.702734-440-71253239654638/AnsiballZ_stat.py'
Jan 22 13:28:10 compute-1 sudo[56204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:11 compute-1 python3.9[56206]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:28:11 compute-1 sudo[56204]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:11 compute-1 sudo[56356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnzglhgkpotgyshcobjlyiktgjpcwhfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088491.5130103-468-77372007462739/AnsiballZ_stat.py'
Jan 22 13:28:11 compute-1 sudo[56356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:11 compute-1 python3.9[56358]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:28:11 compute-1 sudo[56356]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:12 compute-1 sudo[56508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yebaosaaqerysvhsdsuhdwwgpjvlllyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088492.3254604-497-143949279362027/AnsiballZ_command.py'
Jan 22 13:28:12 compute-1 sudo[56508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:12 compute-1 python3.9[56510]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:28:12 compute-1 sudo[56508]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:13 compute-1 sudo[56661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndkalnuriznsrsxoxzymffjreukrgndp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088493.2681816-527-206128695139889/AnsiballZ_service_facts.py'
Jan 22 13:28:13 compute-1 sudo[56661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:13 compute-1 python3.9[56663]: ansible-service_facts Invoked
Jan 22 13:28:13 compute-1 network[56680]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:28:13 compute-1 network[56681]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:28:13 compute-1 network[56682]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:28:16 compute-1 sudo[56661]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:19 compute-1 sudo[56965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rliwyjvhfthsocrhswlcnlzzqdoxbyfx ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769088499.0475214-572-257441210157240/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769088499.0475214-572-257441210157240/args'
Jan 22 13:28:19 compute-1 sudo[56965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:19 compute-1 sudo[56965]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:20 compute-1 sudo[57132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uebzjjxcabcqazfmmagistpcxhqxaeja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088499.833156-605-52043763283687/AnsiballZ_dnf.py'
Jan 22 13:28:20 compute-1 sudo[57132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:20 compute-1 python3.9[57134]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:28:21 compute-1 sudo[57132]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:23 compute-1 sudo[57285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zerfrwvszwxasqndizradgkfidwhhnly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088502.6339495-644-237950075253180/AnsiballZ_package_facts.py'
Jan 22 13:28:23 compute-1 sudo[57285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:23 compute-1 python3.9[57287]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 22 13:28:23 compute-1 sudo[57285]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:25 compute-1 sudo[57437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyvrtpgvdfegcntuyirbthbgzvjwjmha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088504.8943295-674-184524665739314/AnsiballZ_stat.py'
Jan 22 13:28:25 compute-1 sudo[57437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:25 compute-1 python3.9[57439]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:25 compute-1 sudo[57437]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:25 compute-1 sudo[57562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntucigwztkuwxmzzlcoqjqivxpcuyihg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088504.8943295-674-184524665739314/AnsiballZ_copy.py'
Jan 22 13:28:25 compute-1 sudo[57562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:26 compute-1 python3.9[57564]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088504.8943295-674-184524665739314/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:26 compute-1 sudo[57562]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:26 compute-1 sudo[57716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oijfhxodoclijjugpqgspzttsubelisi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088506.425851-720-69465525638892/AnsiballZ_stat.py'
Jan 22 13:28:26 compute-1 sudo[57716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:26 compute-1 python3.9[57718]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:27 compute-1 sudo[57716]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:27 compute-1 sudo[57841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwjdiausqsvsbfuojkwqxjouuleettbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088506.425851-720-69465525638892/AnsiballZ_copy.py'
Jan 22 13:28:27 compute-1 sudo[57841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:27 compute-1 python3.9[57843]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088506.425851-720-69465525638892/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:27 compute-1 sudo[57841]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:29 compute-1 sudo[57995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwrcnramsgkdalvpjglitgdslrxdxyeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088508.7523067-784-279476792688884/AnsiballZ_lineinfile.py'
Jan 22 13:28:29 compute-1 sudo[57995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:29 compute-1 python3.9[57997]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:29 compute-1 sudo[57995]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:30 compute-1 sudo[58149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sohjvqczfwkiaafnndfqzcjetdtfyduy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088510.6441803-828-97570324789947/AnsiballZ_setup.py'
Jan 22 13:28:30 compute-1 sudo[58149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:31 compute-1 python3.9[58151]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:28:31 compute-1 sudo[58149]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:32 compute-1 sudo[58233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szhhdfpdynsmvspglolktinwvighhaxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088510.6441803-828-97570324789947/AnsiballZ_systemd.py'
Jan 22 13:28:32 compute-1 sudo[58233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:32 compute-1 python3.9[58235]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:28:32 compute-1 sudo[58233]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:33 compute-1 sudo[58387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jofnxpmenjgmfafnkyzcxpfutsswmwze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088513.438752-876-50090675772262/AnsiballZ_setup.py'
Jan 22 13:28:33 compute-1 sudo[58387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:34 compute-1 python3.9[58389]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:28:34 compute-1 sudo[58387]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:34 compute-1 sudo[58471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkuupoyvpiitscwrlzqqwqqfnfncqufi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088513.438752-876-50090675772262/AnsiballZ_systemd.py'
Jan 22 13:28:34 compute-1 sudo[58471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:34 compute-1 python3.9[58473]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:28:34 compute-1 chronyd[807]: chronyd exiting
Jan 22 13:28:34 compute-1 systemd[1]: Stopping NTP client/server...
Jan 22 13:28:34 compute-1 systemd[1]: chronyd.service: Deactivated successfully.
Jan 22 13:28:34 compute-1 systemd[1]: Stopped NTP client/server.
Jan 22 13:28:34 compute-1 systemd[1]: Starting NTP client/server...
Jan 22 13:28:35 compute-1 chronyd[58482]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 22 13:28:35 compute-1 chronyd[58482]: Frequency -26.955 +/- 0.107 ppm read from /var/lib/chrony/drift
Jan 22 13:28:35 compute-1 chronyd[58482]: Loaded seccomp filter (level 2)
Jan 22 13:28:35 compute-1 systemd[1]: Started NTP client/server.
Jan 22 13:28:35 compute-1 sudo[58471]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:35 compute-1 sshd-session[53533]: Connection closed by 192.168.122.30 port 50606
Jan 22 13:28:35 compute-1 sshd-session[53530]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:28:35 compute-1 systemd[1]: session-13.scope: Deactivated successfully.
Jan 22 13:28:35 compute-1 systemd[1]: session-13.scope: Consumed 27.764s CPU time.
Jan 22 13:28:35 compute-1 systemd-logind[787]: Session 13 logged out. Waiting for processes to exit.
Jan 22 13:28:35 compute-1 systemd-logind[787]: Removed session 13.
Jan 22 13:28:41 compute-1 sshd-session[58508]: Accepted publickey for zuul from 192.168.122.30 port 48920 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:28:41 compute-1 systemd-logind[787]: New session 14 of user zuul.
Jan 22 13:28:41 compute-1 systemd[1]: Started Session 14 of User zuul.
Jan 22 13:28:41 compute-1 sshd-session[58508]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:28:41 compute-1 sudo[58661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdpgershxrwofjpobzjfhsiwdcyizgqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088521.4163015-27-78202494247588/AnsiballZ_file.py'
Jan 22 13:28:41 compute-1 sudo[58661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:42 compute-1 python3.9[58663]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:42 compute-1 sudo[58661]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:44 compute-1 sudo[58813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyvjlqlfruimsyryudqidfbpptkwypqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088523.690781-63-93912219811998/AnsiballZ_stat.py'
Jan 22 13:28:44 compute-1 sudo[58813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:44 compute-1 python3.9[58815]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:44 compute-1 sudo[58813]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:44 compute-1 sudo[58936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwdtfbfkfeuupdsdftihqcxnoeazkyth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088523.690781-63-93912219811998/AnsiballZ_copy.py'
Jan 22 13:28:44 compute-1 sudo[58936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:45 compute-1 python3.9[58938]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088523.690781-63-93912219811998/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:45 compute-1 sudo[58936]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:45 compute-1 sshd-session[58511]: Connection closed by 192.168.122.30 port 48920
Jan 22 13:28:45 compute-1 sshd-session[58508]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:28:45 compute-1 systemd[1]: session-14.scope: Deactivated successfully.
Jan 22 13:28:45 compute-1 systemd[1]: session-14.scope: Consumed 1.704s CPU time.
Jan 22 13:28:45 compute-1 systemd-logind[787]: Session 14 logged out. Waiting for processes to exit.
Jan 22 13:28:45 compute-1 systemd-logind[787]: Removed session 14.
Jan 22 13:28:51 compute-1 sshd-session[58963]: Accepted publickey for zuul from 192.168.122.30 port 52348 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:28:51 compute-1 systemd-logind[787]: New session 15 of user zuul.
Jan 22 13:28:51 compute-1 systemd[1]: Started Session 15 of User zuul.
Jan 22 13:28:51 compute-1 sshd-session[58963]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:28:52 compute-1 python3.9[59116]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:28:53 compute-1 sudo[59270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzifombmlelrktoesnaiojjlrtizylhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088533.2239652-60-207770259586730/AnsiballZ_file.py'
Jan 22 13:28:53 compute-1 sudo[59270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:53 compute-1 python3.9[59272]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:53 compute-1 sudo[59270]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:54 compute-1 sudo[59445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmnehokazdwnttmerhqvsjzzrxajczyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088534.2928286-84-245382725081599/AnsiballZ_stat.py'
Jan 22 13:28:54 compute-1 sudo[59445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:55 compute-1 python3.9[59447]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:55 compute-1 sudo[59445]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:55 compute-1 sudo[59568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guytcmjnxgtxxylbdtpbbxokpkkuelgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088534.2928286-84-245382725081599/AnsiballZ_copy.py'
Jan 22 13:28:55 compute-1 sudo[59568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:55 compute-1 python3.9[59570]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769088534.2928286-84-245382725081599/.source.json _original_basename=.66ptb2bq follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:55 compute-1 sudo[59568]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:56 compute-1 sudo[59720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkkmsyysnixbquycwtaxwwjzptsgnnwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088536.4545789-153-93563072989677/AnsiballZ_stat.py'
Jan 22 13:28:56 compute-1 sudo[59720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:56 compute-1 python3.9[59722]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:57 compute-1 sudo[59720]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:57 compute-1 sudo[59843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlajonsprdxxqlwrlxurfxivclcdtzbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088536.4545789-153-93563072989677/AnsiballZ_copy.py'
Jan 22 13:28:57 compute-1 sudo[59843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:57 compute-1 python3.9[59845]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088536.4545789-153-93563072989677/.source _original_basename=.xj51wf45 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:57 compute-1 sudo[59843]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:58 compute-1 sudo[59995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjgxtlaibtepmcodksayqyoztsmydauw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088538.196624-201-134559597979860/AnsiballZ_file.py'
Jan 22 13:28:58 compute-1 sudo[59995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:58 compute-1 python3.9[59997]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:28:58 compute-1 sudo[59995]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:59 compute-1 sudo[60147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qilfeerdjpnrtabzzzzndthuyrjwaclh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088539.0574381-225-258388828749454/AnsiballZ_stat.py'
Jan 22 13:28:59 compute-1 sudo[60147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:59 compute-1 python3.9[60149]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:59 compute-1 sudo[60147]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:00 compute-1 sudo[60270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clzjqyeydbruffrtjnrdpiujmjhrklnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088539.0574381-225-258388828749454/AnsiballZ_copy.py'
Jan 22 13:29:00 compute-1 sudo[60270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:00 compute-1 python3.9[60272]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088539.0574381-225-258388828749454/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:29:00 compute-1 sudo[60270]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:01 compute-1 sudo[60422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjgubolzztlbqvdrnhefebofxjwytips ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088540.5408535-225-150098069100563/AnsiballZ_stat.py'
Jan 22 13:29:01 compute-1 sudo[60422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:01 compute-1 python3.9[60424]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:01 compute-1 sudo[60422]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:01 compute-1 sudo[60545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlskygwznvdsgwemrfkzvvlrpwdckzhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088540.5408535-225-150098069100563/AnsiballZ_copy.py'
Jan 22 13:29:01 compute-1 sudo[60545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:01 compute-1 python3.9[60547]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088540.5408535-225-150098069100563/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:29:01 compute-1 sudo[60545]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:02 compute-1 sudo[60697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbrjnivqzqyqzctbbeybotkmquilfsip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088542.1405108-312-281145193668322/AnsiballZ_file.py'
Jan 22 13:29:02 compute-1 sudo[60697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:02 compute-1 python3.9[60699]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:02 compute-1 sudo[60697]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:03 compute-1 sudo[60849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfxsepmjsrqijythujxsjygmkyycsasw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088542.929871-337-249181427114140/AnsiballZ_stat.py'
Jan 22 13:29:03 compute-1 sudo[60849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:03 compute-1 python3.9[60851]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:03 compute-1 sudo[60849]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:03 compute-1 sudo[60972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scpvczljmrywdpokzinhmqjfqpdshigr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088542.929871-337-249181427114140/AnsiballZ_copy.py'
Jan 22 13:29:03 compute-1 sudo[60972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:03 compute-1 python3.9[60974]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088542.929871-337-249181427114140/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:04 compute-1 sudo[60972]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:05 compute-1 sudo[61124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hznrrvfsuqswpvcwvqgkwwqwwdibnvay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088545.6632912-381-13740477493226/AnsiballZ_stat.py'
Jan 22 13:29:05 compute-1 sudo[61124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:06 compute-1 python3.9[61126]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:06 compute-1 sudo[61124]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:06 compute-1 sudo[61247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfzvzmnjuwrqgztrjavdimopstvnxxcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088545.6632912-381-13740477493226/AnsiballZ_copy.py'
Jan 22 13:29:06 compute-1 sudo[61247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:06 compute-1 python3.9[61249]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088545.6632912-381-13740477493226/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:06 compute-1 sudo[61247]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:07 compute-1 sudo[61399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlwciuothehzsxpevlgsfifgtmwnhpac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088547.0007017-426-179228103057186/AnsiballZ_systemd.py'
Jan 22 13:29:07 compute-1 sudo[61399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:07 compute-1 python3.9[61401]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:29:07 compute-1 systemd[1]: Reloading.
Jan 22 13:29:08 compute-1 systemd-rc-local-generator[61426]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:29:08 compute-1 systemd-sysv-generator[61431]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:29:08 compute-1 systemd[1]: Reloading.
Jan 22 13:29:08 compute-1 systemd-rc-local-generator[61468]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:29:08 compute-1 systemd-sysv-generator[61472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:29:08 compute-1 systemd[1]: Starting EDPM Container Shutdown...
Jan 22 13:29:08 compute-1 systemd[1]: Finished EDPM Container Shutdown.
Jan 22 13:29:08 compute-1 sudo[61399]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:09 compute-1 sudo[61626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syeslpxwwuxigxciyagagvzvgusuyxis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088548.9482949-450-125501506428434/AnsiballZ_stat.py'
Jan 22 13:29:09 compute-1 sudo[61626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:09 compute-1 python3.9[61628]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:09 compute-1 sudo[61626]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:09 compute-1 sudo[61749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtixrmcrmcamtsrwyausivpwdpqwakqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088548.9482949-450-125501506428434/AnsiballZ_copy.py'
Jan 22 13:29:09 compute-1 sudo[61749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:10 compute-1 python3.9[61751]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088548.9482949-450-125501506428434/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:10 compute-1 sudo[61749]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:10 compute-1 sudo[61901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kblawvwjhdhfctlssgserevghbaziilh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088550.2340848-495-214433902483877/AnsiballZ_stat.py'
Jan 22 13:29:10 compute-1 sudo[61901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:10 compute-1 python3.9[61903]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:10 compute-1 sudo[61901]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:11 compute-1 sudo[62024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icpllpntmokjscyetgfzjkzanrprziaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088550.2340848-495-214433902483877/AnsiballZ_copy.py'
Jan 22 13:29:11 compute-1 sudo[62024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:11 compute-1 python3.9[62026]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088550.2340848-495-214433902483877/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:11 compute-1 sudo[62024]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:11 compute-1 sudo[62176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjjypiqvoszycydyiebkrmgommtuaihb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088551.6233525-540-269682371085606/AnsiballZ_systemd.py'
Jan 22 13:29:11 compute-1 sudo[62176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:12 compute-1 python3.9[62178]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:29:12 compute-1 systemd[1]: Reloading.
Jan 22 13:29:12 compute-1 systemd-sysv-generator[62212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:29:12 compute-1 systemd-rc-local-generator[62207]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:29:12 compute-1 systemd[1]: Reloading.
Jan 22 13:29:12 compute-1 systemd-sysv-generator[62244]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:29:12 compute-1 systemd-rc-local-generator[62240]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:29:12 compute-1 systemd[1]: Starting Create netns directory...
Jan 22 13:29:12 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 13:29:12 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 13:29:12 compute-1 systemd[1]: Finished Create netns directory.
Jan 22 13:29:12 compute-1 sudo[62176]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:13 compute-1 python3.9[62406]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:29:13 compute-1 network[62423]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:29:13 compute-1 network[62424]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:29:13 compute-1 network[62425]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:29:20 compute-1 sudo[62685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyenmiuhvjkszcgxvvlzjwvlsdtpraxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088559.8207877-588-98655129140101/AnsiballZ_systemd.py'
Jan 22 13:29:20 compute-1 sudo[62685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:20 compute-1 python3.9[62687]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:29:20 compute-1 systemd[1]: Reloading.
Jan 22 13:29:20 compute-1 systemd-sysv-generator[62721]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:29:20 compute-1 systemd-rc-local-generator[62716]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:29:20 compute-1 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 22 13:29:21 compute-1 iptables.init[62727]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 22 13:29:21 compute-1 iptables.init[62727]: iptables: Flushing firewall rules: [  OK  ]
Jan 22 13:29:21 compute-1 systemd[1]: iptables.service: Deactivated successfully.
Jan 22 13:29:21 compute-1 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 22 13:29:21 compute-1 sudo[62685]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:21 compute-1 sudo[62922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtyczpjzfjwddaeiaoqcvxbqigncofkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088561.4395952-588-158842701835095/AnsiballZ_systemd.py'
Jan 22 13:29:21 compute-1 sudo[62922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:22 compute-1 python3.9[62924]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:29:22 compute-1 sudo[62922]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:22 compute-1 sudo[63076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrwbpswqmrdgsypmkdhflmcocotpfhuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088562.3845816-636-276853102081813/AnsiballZ_systemd.py'
Jan 22 13:29:22 compute-1 sudo[63076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:22 compute-1 python3.9[63078]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:29:23 compute-1 systemd[1]: Reloading.
Jan 22 13:29:23 compute-1 systemd-rc-local-generator[63102]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:29:23 compute-1 systemd-sysv-generator[63108]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:29:23 compute-1 systemd[1]: Starting Netfilter Tables...
Jan 22 13:29:23 compute-1 systemd[1]: Finished Netfilter Tables.
Jan 22 13:29:23 compute-1 sudo[63076]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:24 compute-1 sudo[63268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaiacpbhktewfbpxuxbpnninmmktgrng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088563.8995736-660-48583095390431/AnsiballZ_command.py'
Jan 22 13:29:24 compute-1 sudo[63268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:24 compute-1 python3.9[63270]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:29:24 compute-1 sudo[63268]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:30 compute-1 sudo[63421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lawmwsyimwdersufsmokjasbakectgqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088570.1152885-702-267498966640148/AnsiballZ_stat.py'
Jan 22 13:29:30 compute-1 sudo[63421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:30 compute-1 python3.9[63423]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:30 compute-1 sudo[63421]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:31 compute-1 sudo[63546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-canmcobabcbslxgypykipugyvayabvvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088570.1152885-702-267498966640148/AnsiballZ_copy.py'
Jan 22 13:29:31 compute-1 sudo[63546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:31 compute-1 python3.9[63548]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088570.1152885-702-267498966640148/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:31 compute-1 sudo[63546]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:32 compute-1 sudo[63699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghacscfhxxeoemlomvmuempiqafwavni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088571.9827588-747-280894824310687/AnsiballZ_systemd.py'
Jan 22 13:29:32 compute-1 sudo[63699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:32 compute-1 python3.9[63701]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:29:32 compute-1 systemd[1]: Reloading OpenSSH server daemon...
Jan 22 13:29:32 compute-1 sshd[1008]: Received SIGHUP; restarting.
Jan 22 13:29:32 compute-1 systemd[1]: Reloaded OpenSSH server daemon.
Jan 22 13:29:32 compute-1 sshd[1008]: Server listening on 0.0.0.0 port 22.
Jan 22 13:29:32 compute-1 sshd[1008]: Server listening on :: port 22.
Jan 22 13:29:32 compute-1 sudo[63699]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:33 compute-1 sudo[63855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfqecwofjyhrlsfgpgsgpditumecpydt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088573.023678-771-220586606559671/AnsiballZ_file.py'
Jan 22 13:29:33 compute-1 sudo[63855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:33 compute-1 python3.9[63857]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:33 compute-1 sudo[63855]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:34 compute-1 sudo[64007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbroqpjhmaqrwfsiuggfukihzsffcwpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088573.6808894-795-45369300012057/AnsiballZ_stat.py'
Jan 22 13:29:34 compute-1 sudo[64007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:34 compute-1 python3.9[64009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:34 compute-1 sudo[64007]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:34 compute-1 sudo[64130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojmpjpwsorngsndmzehywrfqtzogxdkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088573.6808894-795-45369300012057/AnsiballZ_copy.py'
Jan 22 13:29:34 compute-1 sudo[64130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:34 compute-1 python3.9[64132]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088573.6808894-795-45369300012057/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:34 compute-1 sudo[64130]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:35 compute-1 sudo[64282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmscntempodgjohecvhjzyjainakfsvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088575.301378-849-228811765961142/AnsiballZ_timezone.py'
Jan 22 13:29:35 compute-1 sudo[64282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:35 compute-1 python3.9[64284]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 13:29:35 compute-1 systemd[1]: Starting Time & Date Service...
Jan 22 13:29:36 compute-1 systemd[1]: Started Time & Date Service.
Jan 22 13:29:36 compute-1 sudo[64282]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:36 compute-1 sudo[64438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqfcxsfbgwnjlesngqczybfquiedymml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088576.4677134-876-192491582175433/AnsiballZ_file.py'
Jan 22 13:29:36 compute-1 sudo[64438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:36 compute-1 python3.9[64440]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:36 compute-1 sudo[64438]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:37 compute-1 sudo[64590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scstasdezzcflfktqhdrnefbtzgfdnpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088577.2663472-900-244669063018342/AnsiballZ_stat.py'
Jan 22 13:29:37 compute-1 sudo[64590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:37 compute-1 python3.9[64592]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:37 compute-1 sudo[64590]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:38 compute-1 sudo[64713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcoioftfkupkskmhvujmdvjnxbqkilak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088577.2663472-900-244669063018342/AnsiballZ_copy.py'
Jan 22 13:29:38 compute-1 sudo[64713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:38 compute-1 python3.9[64715]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088577.2663472-900-244669063018342/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:38 compute-1 sudo[64713]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:38 compute-1 sudo[64865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iakrofdyrrujjwkkcrjqqznrwqcjeyxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088578.6459265-945-248789055637226/AnsiballZ_stat.py'
Jan 22 13:29:38 compute-1 sudo[64865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:39 compute-1 python3.9[64867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:39 compute-1 sudo[64865]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:39 compute-1 sudo[64988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmtbuaixptzlovhyqjsenjvgtmebbexk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088578.6459265-945-248789055637226/AnsiballZ_copy.py'
Jan 22 13:29:39 compute-1 sudo[64988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:39 compute-1 python3.9[64990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088578.6459265-945-248789055637226/.source.yaml _original_basename=.n05rcse9 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:39 compute-1 sudo[64988]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:40 compute-1 sudo[65140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxujkcywlgettrrbxpfnxaviawdwdjda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088579.943893-990-38286361329198/AnsiballZ_stat.py'
Jan 22 13:29:40 compute-1 sudo[65140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:40 compute-1 python3.9[65142]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:40 compute-1 sudo[65140]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:40 compute-1 sudo[65263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beydegmlkuzybchbmzhfuxcsolpqgvrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088579.943893-990-38286361329198/AnsiballZ_copy.py'
Jan 22 13:29:40 compute-1 sudo[65263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:40 compute-1 python3.9[65265]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088579.943893-990-38286361329198/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:41 compute-1 sudo[65263]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:41 compute-1 sudo[65415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdcwttxmvzddmeygkxwejmrhjgerbskp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088581.199022-1035-280542237954588/AnsiballZ_command.py'
Jan 22 13:29:41 compute-1 sudo[65415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:41 compute-1 python3.9[65417]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:29:41 compute-1 sudo[65415]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:42 compute-1 sudo[65568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruqjazxgqzamspqyjedjwiqyvqtwefsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088581.9659045-1059-108590063377455/AnsiballZ_command.py'
Jan 22 13:29:42 compute-1 sudo[65568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:42 compute-1 python3.9[65570]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:29:42 compute-1 sudo[65568]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:43 compute-1 sudo[65721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtymvinossdswuryhwzkwhcsspyswksn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769088582.847931-1083-29344187546199/AnsiballZ_edpm_nftables_from_files.py'
Jan 22 13:29:43 compute-1 sudo[65721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:43 compute-1 python3[65723]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 13:29:43 compute-1 sudo[65721]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:44 compute-1 sudo[65873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oacovfmtqnixlqrpmmvwtyziwvdfylrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088583.7906756-1107-181337619762922/AnsiballZ_stat.py'
Jan 22 13:29:44 compute-1 sudo[65873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:44 compute-1 python3.9[65875]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:44 compute-1 sudo[65873]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:44 compute-1 sudo[65996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-earmwjgkmktojjjmpwghmdzbpzivyvsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088583.7906756-1107-181337619762922/AnsiballZ_copy.py'
Jan 22 13:29:44 compute-1 sudo[65996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:44 compute-1 python3.9[65998]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088583.7906756-1107-181337619762922/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:44 compute-1 sudo[65996]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:45 compute-1 sudo[66148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdhuzuojuteafyyzewolpqizhhlcifyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088585.1647785-1153-279933507833880/AnsiballZ_stat.py'
Jan 22 13:29:45 compute-1 sudo[66148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:45 compute-1 python3.9[66150]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:45 compute-1 sudo[66148]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:46 compute-1 sudo[66271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfphqaurhqjorakwgieurfjqmpzxnmys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088585.1647785-1153-279933507833880/AnsiballZ_copy.py'
Jan 22 13:29:46 compute-1 sudo[66271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:46 compute-1 python3.9[66273]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088585.1647785-1153-279933507833880/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:46 compute-1 sudo[66271]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:47 compute-1 sudo[66423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esornnwgauuugyfotrtczuhwofhzsdin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088586.7150722-1197-75211208544196/AnsiballZ_stat.py'
Jan 22 13:29:47 compute-1 sudo[66423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:47 compute-1 python3.9[66425]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:47 compute-1 sudo[66423]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:47 compute-1 sudo[66546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxnbytyuglbbcqnrlcgninvouytankqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088586.7150722-1197-75211208544196/AnsiballZ_copy.py'
Jan 22 13:29:47 compute-1 sudo[66546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:47 compute-1 python3.9[66548]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088586.7150722-1197-75211208544196/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:47 compute-1 sudo[66546]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:48 compute-1 sudo[66698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpwvxcmtfyqojlhmdxusqyqsyuziriyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088587.9993486-1242-141232962556755/AnsiballZ_stat.py'
Jan 22 13:29:48 compute-1 sudo[66698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:48 compute-1 python3.9[66700]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:48 compute-1 sudo[66698]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:48 compute-1 sudo[66821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewkivmstxdqimeiudledhnpzjodwqpjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088587.9993486-1242-141232962556755/AnsiballZ_copy.py'
Jan 22 13:29:48 compute-1 sudo[66821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:49 compute-1 python3.9[66823]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088587.9993486-1242-141232962556755/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:49 compute-1 sudo[66821]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:49 compute-1 sudo[66973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydtejlcymzdwvhqspeltwsbgounoopls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088589.3706844-1287-128441354822573/AnsiballZ_stat.py'
Jan 22 13:29:49 compute-1 sudo[66973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:49 compute-1 python3.9[66975]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:49 compute-1 sudo[66973]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:50 compute-1 sudo[67096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbijtjzfmufuatokglemksmedmzpyghx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088589.3706844-1287-128441354822573/AnsiballZ_copy.py'
Jan 22 13:29:50 compute-1 sudo[67096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:50 compute-1 python3.9[67098]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088589.3706844-1287-128441354822573/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:50 compute-1 sudo[67096]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:51 compute-1 sudo[67248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiuwyvjbxybtjcwafddmusypngkyikxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088590.8729656-1332-57188993404644/AnsiballZ_file.py'
Jan 22 13:29:51 compute-1 sudo[67248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:51 compute-1 python3.9[67250]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:51 compute-1 sudo[67248]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:51 compute-1 sudo[67400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrtpomlaceofenqtxlukgwhyqifusben ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088591.576615-1356-82839247472709/AnsiballZ_command.py'
Jan 22 13:29:51 compute-1 sudo[67400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:52 compute-1 python3.9[67402]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:29:52 compute-1 sudo[67400]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:52 compute-1 sudo[67559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgsnrfzreldofoosppiofuxsezcdagrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088592.376675-1380-143924295143304/AnsiballZ_blockinfile.py'
Jan 22 13:29:52 compute-1 sudo[67559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:53 compute-1 python3.9[67561]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:53 compute-1 sudo[67559]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:53 compute-1 sudo[67712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyxvqxuptykqymkryiundtengnlujrvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088593.408362-1407-153934770991494/AnsiballZ_file.py'
Jan 22 13:29:53 compute-1 sudo[67712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:53 compute-1 python3.9[67714]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:53 compute-1 sudo[67712]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:54 compute-1 sudo[67864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taxinxzgttlvmsjkdflibpfnjkgsztyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088594.0897646-1407-65472715553966/AnsiballZ_file.py'
Jan 22 13:29:54 compute-1 sudo[67864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:54 compute-1 python3.9[67866]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:54 compute-1 sudo[67864]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:55 compute-1 sudo[68016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuxuozoxawoffplnbxmngevsrebyxznt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088594.9750254-1452-209963945616980/AnsiballZ_mount.py'
Jan 22 13:29:55 compute-1 sudo[68016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:55 compute-1 python3.9[68018]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 13:29:55 compute-1 sudo[68016]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:56 compute-1 sudo[68169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sphhfldsngbzvvgbrtvcyyradtjtselh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088596.0320177-1452-268598741019164/AnsiballZ_mount.py'
Jan 22 13:29:56 compute-1 sudo[68169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:56 compute-1 python3.9[68171]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 13:29:56 compute-1 sudo[68169]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:57 compute-1 sshd-session[58966]: Connection closed by 192.168.122.30 port 52348
Jan 22 13:29:57 compute-1 sshd-session[58963]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:29:57 compute-1 systemd[1]: session-15.scope: Deactivated successfully.
Jan 22 13:29:57 compute-1 systemd[1]: session-15.scope: Consumed 37.796s CPU time.
Jan 22 13:29:57 compute-1 systemd-logind[787]: Session 15 logged out. Waiting for processes to exit.
Jan 22 13:29:57 compute-1 systemd-logind[787]: Removed session 15.
Jan 22 13:30:03 compute-1 sshd-session[68197]: Accepted publickey for zuul from 192.168.122.30 port 51264 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:30:03 compute-1 systemd-logind[787]: New session 16 of user zuul.
Jan 22 13:30:03 compute-1 systemd[1]: Started Session 16 of User zuul.
Jan 22 13:30:03 compute-1 sshd-session[68197]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:30:04 compute-1 sudo[68350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtmovteavkatzfrvcxducolundwcpeuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088604.057939-24-251282626645224/AnsiballZ_tempfile.py'
Jan 22 13:30:04 compute-1 sudo[68350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:05 compute-1 python3.9[68352]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 22 13:30:05 compute-1 sudo[68350]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:06 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 13:30:06 compute-1 sudo[68504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smjiqtbxlkfnlxoqzrxmyckyldrzcjfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088606.0130827-60-275177011254489/AnsiballZ_stat.py'
Jan 22 13:30:06 compute-1 sudo[68504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:06 compute-1 python3.9[68506]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:30:06 compute-1 sudo[68504]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:07 compute-1 sudo[68656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdcqejxpytwunvmmvylarnqhozoajsgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088607.1125364-90-6938174878964/AnsiballZ_setup.py'
Jan 22 13:30:07 compute-1 sudo[68656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:08 compute-1 python3.9[68658]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:30:08 compute-1 sudo[68656]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:08 compute-1 sudo[68808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dexalpdbtzczovoijcenscvaveegmjgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088608.5073574-115-39316235795105/AnsiballZ_blockinfile.py'
Jan 22 13:30:08 compute-1 sudo[68808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:09 compute-1 python3.9[68810]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCz1S+AyqG+uG2QcnBxDRKRCSQ1ADb7AX9YKwfPf8jy0Q8YD3aJm/CVexcMyR1BQUaGjRFoZkm/O4ekVQ36cOQ2M7HRv78pGNm0BGtfNeFeRB5w5+RSPgj1rY9joGiRIZoyVVlz9uuM9NTlYiNC/X5gLWfreUbCGl6lDKkxGdOjUnjuZ2djcx48WXZurkkcjd9j3WCQl899CDpx6elTEEZaV3/mbpfEtOtTXEFfoq1Z1XSjngnkZMARqt+JIN02f6kgEgWNSRAJxqYbFz1jtY43UJ/C2mO29LedfXOW3dpKCC6QHdPDSQJp2Jrf0izl52jvmpDvr6wWY9PW9AmMyxh1gSuP1a/uteKBBf7vlxtpYJWDSivQxPZw3RbBZuhspxefEOUXkwGNycW/+rPGFZRrAVYWLTZ6dLn0aviyE1+ZEDIMJop1CohPOhvJxJ7s1ulnjvVDc7kLhmBewXbeY3Lp6SoMUK8ziKHsTr2Y/RfK8d7LXmARc7+O9VWI4VVV8U=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIArjsNRQko0Q06DDAhSCoRYTLidRzR9vGa18TMghIrTh
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBDfBKVIdWmS1D3kNVJYnvsERskkDp7/TXgEseqOABxcNISULCvy6hWTcKYjXdFK5Yrl53dvxfzzAGTPPln3an4=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDARChhswCxxjhho4qSL0BKXUq4AvMW1MDxy3K15MpkFlnctOqsuulAZum+3JFif15RegZjzUC7sGyhSLoFUnXimQHlJIlaGg+Vr+vh23ujuk8uWbwf6q8CF03tz4edapNjNQ+SCuGRJkINMaGGTzgBwoStqctW97kU0Z+A4cqgyMG8V8ZvSG7it0puvEOIYw5rtCA7Svueoxb5UMO33HTJbIuILYxnfEyUIHSsziJHGhRFJJ7PcNH3B4Ogew4pg31GaTi9pIHKHt/YE6WKj7P7HxpTVvgBsI27Pveo4PPkH4yCwjZlntIAvJhn+6czWlsTsmf+EUSf+u1mst9EmzJ/BztwNxcUjlAkf1E3UzoEKB70ShX+201s+/Z9VrHZj4Ku7Ptht9N5F8J01j2+qYCnmeLK9AWqkanEZy5N+hICP1XbFk3IlKyUW4Km0CXwZmXlvdC5Juyt74uJfeiNcsarU75daE2Zx4+j76+JtN8BKgrIAzEcyLOLCOxspAtxGB8=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILuPMhHnuBKJH3E1cndLaLMVE35g920qreV5wjp7kiGA
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMjB1VLvlmcfY82jQpLEcCHkJB16T8jGBBdZAl8DHhdWgqjciDgZx2zOlmbn8OtO4dCPZsLT8VomlJYVqIcvuZ4=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2ocldELG9EA3TbFx5afl1mbwf9X+3Gzx1pKWvAq8+0s5gE2NeAD23paYiiaQ+/r8QE6CHtXOoy/H9FGAGU3oxMrZnEX7nslelo1+Q7jWdE7ILrzUhQpkJeXJNMrA3p7aBbMxEqMXO9Ydl3Cu0CA+jItIQW1oTWLvS+BsWbES09z++jcPgu6HJu1lFXD9GgU53AfhpFcnhuxK8AnNyG1iy1Zus5Xi2NlME94THioW0/1Ek8Pl/PbSdpaErM1lgrZ7Yl/MdCelTNQI4tQrJebtNynEMhrYTBwbruS6YIia/ZSxDJZWt9bg1dpkd24KSpr4hz5kDn4sCFHyPV/JMYmuvTwFByBXc92tBbYeQU5KMBP8OFjlzfm1uAfnM1BOyrPOy7E5RFig010mTP/VruBFb/T+3Z9DqjZCkGagdrKrV80AwqnAsn/mMG/tHarrHLr8BRX1UIFUz2qfFaBpSkmeQ6u3ERLQyvJIjXaXjvvmQVDRQxd8P5HWM57joMC2P+c8=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFTUVWfsHbDnQr7ZM9BkSRv9ghRtTlzwZgmDm9W4jCII
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGjBy4pT9xvRinN5D7FG54iZjTb5U7Le6fRnUKrD4anfJZQ1Vd0mJxikxxi0T2VsVngeW+U82a0S7cK3UeWIL9s=
                                             create=True mode=0644 path=/tmp/ansible.w0oyrl2_ state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:30:09 compute-1 sudo[68808]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:09 compute-1 sudo[68960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acbdfhqygmbgqottqzhrxzwribbajmrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088609.366523-139-66017567929976/AnsiballZ_command.py'
Jan 22 13:30:09 compute-1 sudo[68960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:10 compute-1 python3.9[68962]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.w0oyrl2_' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:30:10 compute-1 sudo[68960]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:10 compute-1 sudo[69114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtkbxphmepbzsihdtdxwiytnwryzwjmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088610.2520056-163-259949376959135/AnsiballZ_file.py'
Jan 22 13:30:10 compute-1 sudo[69114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:10 compute-1 python3.9[69116]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.w0oyrl2_ state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:30:10 compute-1 sudo[69114]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:11 compute-1 sshd-session[68200]: Connection closed by 192.168.122.30 port 51264
Jan 22 13:30:11 compute-1 sshd-session[68197]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:30:11 compute-1 systemd[1]: session-16.scope: Deactivated successfully.
Jan 22 13:30:11 compute-1 systemd[1]: session-16.scope: Consumed 3.545s CPU time.
Jan 22 13:30:11 compute-1 systemd-logind[787]: Session 16 logged out. Waiting for processes to exit.
Jan 22 13:30:11 compute-1 systemd-logind[787]: Removed session 16.
Jan 22 13:30:17 compute-1 sshd-session[69141]: Accepted publickey for zuul from 192.168.122.30 port 51814 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:30:17 compute-1 systemd-logind[787]: New session 17 of user zuul.
Jan 22 13:30:17 compute-1 systemd[1]: Started Session 17 of User zuul.
Jan 22 13:30:17 compute-1 sshd-session[69141]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:30:18 compute-1 python3.9[69294]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:30:19 compute-1 sudo[69448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wayqcxamzmaymztgwptwxrejbmvnvevl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088618.8004248-57-25913021209415/AnsiballZ_systemd.py'
Jan 22 13:30:19 compute-1 sudo[69448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:19 compute-1 python3.9[69450]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 13:30:20 compute-1 sudo[69448]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:20 compute-1 sudo[69602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdjkqiwauqlhaaltwswwmenzuoronvwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088620.2503185-81-106652758776089/AnsiballZ_systemd.py'
Jan 22 13:30:20 compute-1 sudo[69602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:20 compute-1 python3.9[69604]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:30:20 compute-1 sudo[69602]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:21 compute-1 sudo[69755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bezdoesuycyxszsflrfwrubsrlvovugl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088621.2143462-108-215688211770039/AnsiballZ_command.py'
Jan 22 13:30:21 compute-1 sudo[69755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:21 compute-1 python3.9[69757]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:30:21 compute-1 sudo[69755]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:22 compute-1 sudo[69908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptolwhkpayhcwioyhiodjghallohxndg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088622.096821-132-72129109212569/AnsiballZ_stat.py'
Jan 22 13:30:22 compute-1 sudo[69908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:22 compute-1 python3.9[69910]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:30:22 compute-1 sudo[69908]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:23 compute-1 sudo[70062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epmjoxvrszqrpsqgemiyotkimlqpcwyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088622.9908316-156-25448015262017/AnsiballZ_command.py'
Jan 22 13:30:23 compute-1 sudo[70062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:23 compute-1 python3.9[70064]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:30:23 compute-1 sudo[70062]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:24 compute-1 sudo[70217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgzygxwoylcxmhycxpmtxzpjhxfmbdkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088623.751686-180-197889458360982/AnsiballZ_file.py'
Jan 22 13:30:24 compute-1 sudo[70217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:24 compute-1 python3.9[70219]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:30:24 compute-1 sudo[70217]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:24 compute-1 sshd-session[69144]: Connection closed by 192.168.122.30 port 51814
Jan 22 13:30:24 compute-1 sshd-session[69141]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:30:24 compute-1 systemd[1]: session-17.scope: Deactivated successfully.
Jan 22 13:30:24 compute-1 systemd[1]: session-17.scope: Consumed 4.874s CPU time.
Jan 22 13:30:24 compute-1 systemd-logind[787]: Session 17 logged out. Waiting for processes to exit.
Jan 22 13:30:24 compute-1 systemd-logind[787]: Removed session 17.
Jan 22 13:30:30 compute-1 sshd-session[70245]: Accepted publickey for zuul from 192.168.122.30 port 53792 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:30:30 compute-1 systemd-logind[787]: New session 18 of user zuul.
Jan 22 13:30:30 compute-1 systemd[1]: Started Session 18 of User zuul.
Jan 22 13:30:30 compute-1 sshd-session[70245]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:30:31 compute-1 python3.9[70398]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:30:32 compute-1 sudo[70552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqayocrxnfheiikzgbqsrlpnjijbhzyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088632.05087-63-89886018727896/AnsiballZ_setup.py'
Jan 22 13:30:32 compute-1 sudo[70552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:32 compute-1 python3.9[70554]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:30:32 compute-1 sudo[70552]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:33 compute-1 sudo[70636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siwkbdlinqypgebyyvyxymjgjcvgeumh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088632.05087-63-89886018727896/AnsiballZ_dnf.py'
Jan 22 13:30:33 compute-1 sudo[70636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:33 compute-1 python3.9[70638]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 13:30:35 compute-1 sudo[70636]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:36 compute-1 python3.9[70789]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:30:37 compute-1 python3.9[70940]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 13:30:38 compute-1 python3.9[71090]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:30:38 compute-1 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 13:30:39 compute-1 python3.9[71241]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:30:39 compute-1 sshd-session[70248]: Connection closed by 192.168.122.30 port 53792
Jan 22 13:30:39 compute-1 sshd-session[70245]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:30:40 compute-1 systemd-logind[787]: Session 18 logged out. Waiting for processes to exit.
Jan 22 13:30:40 compute-1 systemd[1]: session-18.scope: Deactivated successfully.
Jan 22 13:30:40 compute-1 systemd[1]: session-18.scope: Consumed 6.420s CPU time.
Jan 22 13:30:40 compute-1 systemd-logind[787]: Removed session 18.
Jan 22 13:30:44 compute-1 chronyd[58482]: Selected source 23.159.16.194 (pool.ntp.org)
Jan 22 13:30:48 compute-1 sshd-session[71266]: Accepted publickey for zuul from 38.102.83.41 port 52510 ssh2: RSA SHA256:TuAhGULDfe9nJAKjmqaszwyLr0Lzzf2znQ+Nnm8F8LU
Jan 22 13:30:48 compute-1 systemd-logind[787]: New session 19 of user zuul.
Jan 22 13:30:48 compute-1 systemd[1]: Started Session 19 of User zuul.
Jan 22 13:30:48 compute-1 sshd-session[71266]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:30:48 compute-1 sudo[71342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmrdvqbtrurbylppofskxfsgftxejvjr ; /usr/bin/python3'
Jan 22 13:30:48 compute-1 sudo[71342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:48 compute-1 useradd[71346]: new group: name=ceph-admin, GID=42478
Jan 22 13:30:48 compute-1 useradd[71346]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 22 13:30:48 compute-1 sudo[71342]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:50 compute-1 sudo[71428]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dohpxiabuqeiiafunwpgluhefnfjoyex ; /usr/bin/python3'
Jan 22 13:30:50 compute-1 sudo[71428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:50 compute-1 sudo[71428]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:50 compute-1 sudo[71501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crupkadbvaibsafgzjgolkqcxdxtmgic ; /usr/bin/python3'
Jan 22 13:30:50 compute-1 sudo[71501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:50 compute-1 sudo[71501]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:51 compute-1 sudo[71551]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnscnbzgtatfqvfrbqknqykbahhtpeqy ; /usr/bin/python3'
Jan 22 13:30:51 compute-1 sudo[71551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:51 compute-1 sudo[71551]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:51 compute-1 sudo[71577]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cirmdeqvjwfsriducuakaydilsqynhoc ; /usr/bin/python3'
Jan 22 13:30:51 compute-1 sudo[71577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:51 compute-1 sudo[71577]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:52 compute-1 sudo[71603]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smjojufjhibexsfuuujekumokaoogrxg ; /usr/bin/python3'
Jan 22 13:30:52 compute-1 sudo[71603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:52 compute-1 sudo[71603]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:52 compute-1 sudo[71629]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uskvpdfolpvwlxtjyzshicmuajgulgvo ; /usr/bin/python3'
Jan 22 13:30:52 compute-1 sudo[71629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:52 compute-1 sudo[71629]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:53 compute-1 sudo[71707]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcnccvanvkijijpgkkbuiqisknikbigj ; /usr/bin/python3'
Jan 22 13:30:53 compute-1 sudo[71707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:53 compute-1 sudo[71707]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:53 compute-1 sudo[71780]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhednvewezkwtzmzbnvkznoszffdhwjh ; /usr/bin/python3'
Jan 22 13:30:53 compute-1 sudo[71780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:53 compute-1 sudo[71780]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:54 compute-1 sudo[71882]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtydeaacplkurrsklnpslwdoeywjhsyq ; /usr/bin/python3'
Jan 22 13:30:54 compute-1 sudo[71882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:54 compute-1 sudo[71882]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:54 compute-1 sudo[71955]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmzydatueyfjraoaklylnxtosnldclyp ; /usr/bin/python3'
Jan 22 13:30:54 compute-1 sudo[71955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:54 compute-1 sudo[71955]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:55 compute-1 sudo[72005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msgemdbcgbqcgvmtehrnfemvhwraivbb ; /usr/bin/python3'
Jan 22 13:30:55 compute-1 sudo[72005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:55 compute-1 python3[72007]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:30:56 compute-1 sudo[72005]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:57 compute-1 sudo[72101]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beczmdnznkdreuqokyflzftvwbnqjqle ; /usr/bin/python3'
Jan 22 13:30:57 compute-1 sudo[72101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:57 compute-1 python3[72103]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 22 13:30:58 compute-1 sudo[72101]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:58 compute-1 sudo[72128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqejkwgypauswsodmtvljhjllowfnmkj ; /usr/bin/python3'
Jan 22 13:30:58 compute-1 sudo[72128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:59 compute-1 python3[72130]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 13:30:59 compute-1 sudo[72128]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:59 compute-1 sudo[72154]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-augwikabrnmfkpekbmtreqgzxoggpcwy ; /usr/bin/python3'
Jan 22 13:30:59 compute-1 sudo[72154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:59 compute-1 python3[72156]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:30:59 compute-1 kernel: loop: module loaded
Jan 22 13:30:59 compute-1 kernel: loop3: detected capacity change from 0 to 14680064
Jan 22 13:30:59 compute-1 sudo[72154]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:59 compute-1 sudo[72189]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqgxalziubbubwxcjxkddnsjmkmkgsks ; /usr/bin/python3'
Jan 22 13:30:59 compute-1 sudo[72189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:31:00 compute-1 python3[72191]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:31:00 compute-1 lvm[72194]: PV /dev/loop3 not used.
Jan 22 13:31:00 compute-1 lvm[72196]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 13:31:00 compute-1 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 22 13:31:00 compute-1 lvm[72206]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 13:31:00 compute-1 lvm[72206]: VG ceph_vg0 finished
Jan 22 13:31:00 compute-1 lvm[72203]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 22 13:31:00 compute-1 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 22 13:31:00 compute-1 sudo[72189]: pam_unix(sudo:session): session closed for user root
Jan 22 13:31:00 compute-1 sudo[72282]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkqbaqqvmpnfdkfpmlcxwsiivryfsxge ; /usr/bin/python3'
Jan 22 13:31:00 compute-1 sudo[72282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:31:00 compute-1 python3[72284]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:31:00 compute-1 sudo[72282]: pam_unix(sudo:session): session closed for user root
Jan 22 13:31:01 compute-1 sudo[72355]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irydtnpcworjcngsybfgoazoimumaslp ; /usr/bin/python3'
Jan 22 13:31:01 compute-1 sudo[72355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:31:01 compute-1 python3[72357]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769088660.569294-37030-243449598562900/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:31:01 compute-1 sudo[72355]: pam_unix(sudo:session): session closed for user root
Jan 22 13:31:01 compute-1 sudo[72405]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueqtnkyoooajhpyfhrybuykhfmmrftqd ; /usr/bin/python3'
Jan 22 13:31:01 compute-1 sudo[72405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:31:02 compute-1 python3[72407]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:31:02 compute-1 systemd[1]: Reloading.
Jan 22 13:31:02 compute-1 systemd-rc-local-generator[72437]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:31:02 compute-1 systemd-sysv-generator[72440]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:31:02 compute-1 systemd[1]: Starting Ceph OSD losetup...
Jan 22 13:31:02 compute-1 bash[72447]: /dev/loop3: [64513]:4328449 (/var/lib/ceph-osd-0.img)
Jan 22 13:31:02 compute-1 systemd[1]: Finished Ceph OSD losetup.
Jan 22 13:31:02 compute-1 sudo[72405]: pam_unix(sudo:session): session closed for user root
Jan 22 13:31:02 compute-1 lvm[72449]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 13:31:02 compute-1 lvm[72449]: VG ceph_vg0 finished
Jan 22 13:31:04 compute-1 python3[72473]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:33:27 compute-1 sshd-session[72517]: Accepted publickey for ceph-admin from 192.168.122.100 port 33436 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:27 compute-1 systemd-logind[787]: New session 20 of user ceph-admin.
Jan 22 13:33:27 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Jan 22 13:33:27 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 22 13:33:27 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 22 13:33:27 compute-1 systemd[1]: Starting User Manager for UID 42477...
Jan 22 13:33:27 compute-1 systemd[72521]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:27 compute-1 systemd[72521]: Queued start job for default target Main User Target.
Jan 22 13:33:27 compute-1 systemd[72521]: Created slice User Application Slice.
Jan 22 13:33:27 compute-1 systemd[72521]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 22 13:33:27 compute-1 systemd[72521]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 13:33:27 compute-1 systemd[72521]: Reached target Paths.
Jan 22 13:33:27 compute-1 systemd[72521]: Reached target Timers.
Jan 22 13:33:27 compute-1 systemd[72521]: Starting D-Bus User Message Bus Socket...
Jan 22 13:33:27 compute-1 systemd[72521]: Starting Create User's Volatile Files and Directories...
Jan 22 13:33:27 compute-1 sshd-session[72535]: Accepted publickey for ceph-admin from 192.168.122.100 port 33450 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:27 compute-1 systemd[72521]: Finished Create User's Volatile Files and Directories.
Jan 22 13:33:27 compute-1 systemd[72521]: Listening on D-Bus User Message Bus Socket.
Jan 22 13:33:27 compute-1 systemd[72521]: Reached target Sockets.
Jan 22 13:33:27 compute-1 systemd[72521]: Reached target Basic System.
Jan 22 13:33:27 compute-1 systemd[72521]: Reached target Main User Target.
Jan 22 13:33:27 compute-1 systemd[72521]: Startup finished in 124ms.
Jan 22 13:33:27 compute-1 systemd[1]: Started User Manager for UID 42477.
Jan 22 13:33:27 compute-1 systemd[1]: Started Session 20 of User ceph-admin.
Jan 22 13:33:27 compute-1 sshd-session[72517]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:27 compute-1 systemd-logind[787]: New session 22 of user ceph-admin.
Jan 22 13:33:27 compute-1 systemd[1]: Started Session 22 of User ceph-admin.
Jan 22 13:33:27 compute-1 sshd-session[72535]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:27 compute-1 sudo[72542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:27 compute-1 sudo[72542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:27 compute-1 sudo[72542]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:28 compute-1 sudo[72567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:33:28 compute-1 sudo[72567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:28 compute-1 sudo[72567]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:28 compute-1 sshd-session[72592]: Accepted publickey for ceph-admin from 192.168.122.100 port 53160 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:28 compute-1 systemd-logind[787]: New session 23 of user ceph-admin.
Jan 22 13:33:28 compute-1 systemd[1]: Started Session 23 of User ceph-admin.
Jan 22 13:33:28 compute-1 sshd-session[72592]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:28 compute-1 sudo[72596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:28 compute-1 sudo[72596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:28 compute-1 sudo[72596]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:28 compute-1 sudo[72621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-1
Jan 22 13:33:28 compute-1 sudo[72621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:28 compute-1 sudo[72621]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:28 compute-1 sshd-session[72646]: Accepted publickey for ceph-admin from 192.168.122.100 port 53174 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:28 compute-1 systemd-logind[787]: New session 24 of user ceph-admin.
Jan 22 13:33:28 compute-1 systemd[1]: Started Session 24 of User ceph-admin.
Jan 22 13:33:28 compute-1 sshd-session[72646]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:28 compute-1 sudo[72650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:28 compute-1 sudo[72650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:28 compute-1 sudo[72650]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:28 compute-1 sudo[72675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 22 13:33:28 compute-1 sudo[72675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:28 compute-1 sudo[72675]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:29 compute-1 sshd-session[72700]: Accepted publickey for ceph-admin from 192.168.122.100 port 53180 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:29 compute-1 systemd-logind[787]: New session 25 of user ceph-admin.
Jan 22 13:33:29 compute-1 systemd[1]: Started Session 25 of User ceph-admin.
Jan 22 13:33:29 compute-1 sshd-session[72700]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:29 compute-1 sudo[72704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:29 compute-1 sudo[72704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:29 compute-1 sudo[72704]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:29 compute-1 sudo[72729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:33:29 compute-1 sudo[72729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:29 compute-1 sudo[72729]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:29 compute-1 sshd-session[72754]: Accepted publickey for ceph-admin from 192.168.122.100 port 53182 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:29 compute-1 systemd-logind[787]: New session 26 of user ceph-admin.
Jan 22 13:33:29 compute-1 systemd[1]: Started Session 26 of User ceph-admin.
Jan 22 13:33:29 compute-1 sshd-session[72754]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:29 compute-1 sudo[72758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:29 compute-1 sudo[72758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:29 compute-1 sudo[72758]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:29 compute-1 sudo[72783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:33:29 compute-1 sudo[72783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:29 compute-1 sudo[72783]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:29 compute-1 sshd-session[72808]: Accepted publickey for ceph-admin from 192.168.122.100 port 53184 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:29 compute-1 systemd-logind[787]: New session 27 of user ceph-admin.
Jan 22 13:33:29 compute-1 systemd[1]: Started Session 27 of User ceph-admin.
Jan 22 13:33:29 compute-1 sshd-session[72808]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:30 compute-1 sudo[72812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:30 compute-1 sudo[72812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:30 compute-1 sudo[72812]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:30 compute-1 sudo[72837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 22 13:33:30 compute-1 sudo[72837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:30 compute-1 sudo[72837]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:30 compute-1 sshd-session[72862]: Accepted publickey for ceph-admin from 192.168.122.100 port 53188 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:30 compute-1 systemd-logind[787]: New session 28 of user ceph-admin.
Jan 22 13:33:30 compute-1 systemd[1]: Started Session 28 of User ceph-admin.
Jan 22 13:33:30 compute-1 sshd-session[72862]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:30 compute-1 sudo[72866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:30 compute-1 sudo[72866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:30 compute-1 sudo[72866]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:30 compute-1 sudo[72891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:33:30 compute-1 sudo[72891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:30 compute-1 sudo[72891]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:30 compute-1 sshd-session[72916]: Accepted publickey for ceph-admin from 192.168.122.100 port 53192 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:30 compute-1 systemd-logind[787]: New session 29 of user ceph-admin.
Jan 22 13:33:30 compute-1 systemd[1]: Started Session 29 of User ceph-admin.
Jan 22 13:33:30 compute-1 sshd-session[72916]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:31 compute-1 sudo[72920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:31 compute-1 sudo[72920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:31 compute-1 sudo[72920]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:31 compute-1 sudo[72945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 22 13:33:31 compute-1 sudo[72945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:31 compute-1 sudo[72945]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:31 compute-1 sshd-session[72970]: Accepted publickey for ceph-admin from 192.168.122.100 port 53200 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:31 compute-1 systemd-logind[787]: New session 30 of user ceph-admin.
Jan 22 13:33:31 compute-1 systemd[1]: Started Session 30 of User ceph-admin.
Jan 22 13:33:31 compute-1 sshd-session[72970]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:31 compute-1 sshd-session[72997]: Accepted publickey for ceph-admin from 192.168.122.100 port 53210 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:31 compute-1 systemd-logind[787]: New session 31 of user ceph-admin.
Jan 22 13:33:31 compute-1 systemd[1]: Started Session 31 of User ceph-admin.
Jan 22 13:33:31 compute-1 sshd-session[72997]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:32 compute-1 sudo[73001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:32 compute-1 sudo[73001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:32 compute-1 sudo[73001]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:32 compute-1 sudo[73026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 22 13:33:32 compute-1 sudo[73026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:32 compute-1 sudo[73026]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:32 compute-1 sshd-session[73051]: Accepted publickey for ceph-admin from 192.168.122.100 port 53222 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:32 compute-1 systemd-logind[787]: New session 32 of user ceph-admin.
Jan 22 13:33:32 compute-1 systemd[1]: Started Session 32 of User ceph-admin.
Jan 22 13:33:32 compute-1 sshd-session[73051]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:32 compute-1 sudo[73055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:32 compute-1 sudo[73055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:32 compute-1 sudo[73055]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:32 compute-1 sudo[73080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-1
Jan 22 13:33:32 compute-1 sudo[73080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:32 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:33:32 compute-1 sudo[73080]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:32 compute-1 sudo[73126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:32 compute-1 sudo[73126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:32 compute-1 sudo[73126]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:32 compute-1 sudo[73151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:33:32 compute-1 sudo[73151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:32 compute-1 sudo[73151]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:33 compute-1 sudo[73176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:33 compute-1 sudo[73176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:33 compute-1 sudo[73176]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:33 compute-1 sudo[73201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 13:33:33 compute-1 sudo[73201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:33 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:33:33 compute-1 sudo[73201]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:33 compute-1 sudo[73246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:33 compute-1 sudo[73246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:33 compute-1 sudo[73246]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:33 compute-1 sudo[73271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:33:33 compute-1 sudo[73271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:33 compute-1 sudo[73271]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:33 compute-1 sudo[73296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:33 compute-1 sudo[73296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:33 compute-1 sudo[73296]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:33 compute-1 sudo[73321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:33:33 compute-1 sudo[73321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:33 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:33:33 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:33:33 compute-1 sudo[73321]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:34 compute-1 sudo[73381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:34 compute-1 sudo[73381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:34 compute-1 sudo[73381]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:34 compute-1 sudo[73406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:33:34 compute-1 sudo[73406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:34 compute-1 sudo[73406]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:34 compute-1 sudo[73431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:34 compute-1 sudo[73431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:34 compute-1 sudo[73431]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:34 compute-1 sudo[73456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:33:34 compute-1 sudo[73456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:34 compute-1 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73493 (sysctl)
Jan 22 13:33:34 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:33:34 compute-1 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 22 13:33:34 compute-1 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 22 13:33:35 compute-1 sudo[73456]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:35 compute-1 sudo[73515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:35 compute-1 sudo[73515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:35 compute-1 sudo[73515]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:35 compute-1 sudo[73540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:33:35 compute-1 sudo[73540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:35 compute-1 sudo[73540]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:35 compute-1 sudo[73565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:35 compute-1 sudo[73565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:35 compute-1 sudo[73565]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:35 compute-1 sudo[73590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 22 13:33:35 compute-1 sudo[73590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:35 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:33:35 compute-1 sudo[73590]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:35 compute-1 sudo[73632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:35 compute-1 sudo[73632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:35 compute-1 sudo[73632]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:35 compute-1 sudo[73657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:33:35 compute-1 sudo[73657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:35 compute-1 sudo[73657]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:35 compute-1 sudo[73682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:35 compute-1 sudo[73682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:35 compute-1 sudo[73682]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:35 compute-1 sudo[73707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- inventory --format=json-pretty --filter-for-batch
Jan 22 13:33:35 compute-1 sudo[73707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:36 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:33:36 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:33:40 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat827076912-merged.mount: Deactivated successfully.
Jan 22 13:33:41 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat827076912-lower\x2dmapped.mount: Deactivated successfully.
Jan 22 13:33:59 compute-1 podman[73768]: 2026-01-22 13:33:59.127069829 +0000 UTC m=+22.895917197 container create d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:33:59 compute-1 podman[73768]: 2026-01-22 13:33:59.086232759 +0000 UTC m=+22.855080147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:33:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck537121439-merged.mount: Deactivated successfully.
Jan 22 13:33:59 compute-1 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 22 13:33:59 compute-1 systemd[1]: Started libpod-conmon-d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0.scope.
Jan 22 13:33:59 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:33:59 compute-1 podman[73768]: 2026-01-22 13:33:59.420002464 +0000 UTC m=+23.188849862 container init d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 13:33:59 compute-1 podman[73768]: 2026-01-22 13:33:59.429513355 +0000 UTC m=+23.198360723 container start d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 13:33:59 compute-1 peaceful_euclid[73830]: 167 167
Jan 22 13:33:59 compute-1 systemd[1]: libpod-d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0.scope: Deactivated successfully.
Jan 22 13:33:59 compute-1 podman[73768]: 2026-01-22 13:33:59.461560128 +0000 UTC m=+23.230407496 container attach d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:33:59 compute-1 podman[73768]: 2026-01-22 13:33:59.46239247 +0000 UTC m=+23.231239838 container died d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 13:33:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-25c20e2f4a20ebb79d6474409308ed6b3b66bf56e2d223ddb697681e8577d2bd-merged.mount: Deactivated successfully.
Jan 22 13:33:59 compute-1 podman[73768]: 2026-01-22 13:33:59.526993749 +0000 UTC m=+23.295841127 container remove d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:33:59 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:33:59 compute-1 systemd[1]: libpod-conmon-d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0.scope: Deactivated successfully.
Jan 22 13:33:59 compute-1 podman[73856]: 2026-01-22 13:33:59.710608563 +0000 UTC m=+0.048500905 container create 7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 13:33:59 compute-1 systemd[1]: Started libpod-conmon-7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670.scope.
Jan 22 13:33:59 compute-1 podman[73856]: 2026-01-22 13:33:59.689105261 +0000 UTC m=+0.026997623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:33:59 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:33:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0a331e3b8cffc667eb10bbc2221287d9ce896b52028de1d6b14dac5e57b174/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:33:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0a331e3b8cffc667eb10bbc2221287d9ce896b52028de1d6b14dac5e57b174/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:33:59 compute-1 podman[73856]: 2026-01-22 13:33:59.829998399 +0000 UTC m=+0.167890751 container init 7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:33:59 compute-1 podman[73856]: 2026-01-22 13:33:59.838344709 +0000 UTC m=+0.176237041 container start 7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:33:59 compute-1 podman[73856]: 2026-01-22 13:33:59.858116393 +0000 UTC m=+0.196008755 container attach 7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]: [
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:     {
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:         "available": false,
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:         "ceph_device": false,
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:         "lsm_data": {},
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:         "lvs": [],
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:         "path": "/dev/sr0",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:         "rejected_reasons": [
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "Has a FileSystem",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "Insufficient space (<5GB)"
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:         ],
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:         "sys_api": {
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "actuators": null,
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "device_nodes": "sr0",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "devname": "sr0",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "human_readable_size": "482.00 KB",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "id_bus": "ata",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "model": "QEMU DVD-ROM",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "nr_requests": "2",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "parent": "/dev/sr0",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "partitions": {},
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "path": "/dev/sr0",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "removable": "1",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "rev": "2.5+",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "ro": "0",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "rotational": "1",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "sas_address": "",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "sas_device_handle": "",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "scheduler_mode": "mq-deadline",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "sectors": 0,
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "sectorsize": "2048",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "size": 493568.0,
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "support_discard": "2048",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "type": "disk",
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:             "vendor": "QEMU"
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:         }
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]:     }
Jan 22 13:34:01 compute-1 blissful_nightingale[73872]: ]
Jan 22 13:34:01 compute-1 systemd[1]: libpod-7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670.scope: Deactivated successfully.
Jan 22 13:34:01 compute-1 systemd[1]: libpod-7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670.scope: Consumed 1.264s CPU time.
Jan 22 13:34:01 compute-1 conmon[73872]: conmon 7b9336ec8aee2184619f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670.scope/container/memory.events
Jan 22 13:34:01 compute-1 podman[73856]: 2026-01-22 13:34:01.105006876 +0000 UTC m=+1.442899208 container died 7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:34:07 compute-1 systemd[1]: var-lib-containers-storage-overlay-5e0a331e3b8cffc667eb10bbc2221287d9ce896b52028de1d6b14dac5e57b174-merged.mount: Deactivated successfully.
Jan 22 13:34:07 compute-1 podman[73856]: 2026-01-22 13:34:07.475679698 +0000 UTC m=+7.813572030 container remove 7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:34:07 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:34:07 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:34:07 compute-1 systemd[1]: libpod-conmon-7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670.scope: Deactivated successfully.
Jan 22 13:34:07 compute-1 sudo[73707]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:07 compute-1 sudo[74952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:07 compute-1 sudo[74952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:07 compute-1 sudo[74952]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:07 compute-1 sudo[74977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 22 13:34:07 compute-1 sudo[74977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:07 compute-1 sudo[74977]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:07 compute-1 sudo[75002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:07 compute-1 sudo[75002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:07 compute-1 sudo[75002]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:07 compute-1 sudo[75027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph
Jan 22 13:34:07 compute-1 sudo[75027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:07 compute-1 sudo[75027]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:07 compute-1 sudo[75052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:07 compute-1 sudo[75052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:07 compute-1 sudo[75052]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:07 compute-1 sudo[75077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:34:07 compute-1 sudo[75077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:07 compute-1 sudo[75077]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:08 compute-1 sudo[75102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:08 compute-1 sudo[75102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:08 compute-1 sudo[75102]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:08 compute-1 sudo[75127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:34:08 compute-1 sudo[75127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:08 compute-1 sudo[75127]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:08 compute-1 sudo[75152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:08 compute-1 sudo[75152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:08 compute-1 sudo[75152]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:08 compute-1 sudo[75177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:34:08 compute-1 sudo[75177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:08 compute-1 sudo[75177]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:08 compute-1 sudo[75225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:08 compute-1 sudo[75225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:08 compute-1 sudo[75225]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:08 compute-1 sudo[75250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:34:08 compute-1 sudo[75250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:08 compute-1 sudo[75250]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:08 compute-1 sudo[75275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:08 compute-1 sudo[75275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:08 compute-1 sudo[75275]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:08 compute-1 sudo[75300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:34:08 compute-1 sudo[75300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:08 compute-1 sudo[75300]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:08 compute-1 sudo[75325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:08 compute-1 sudo[75325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:08 compute-1 sudo[75325]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:08 compute-1 sudo[75350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 22 13:34:08 compute-1 sudo[75350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:08 compute-1 sudo[75350]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:08 compute-1 sudo[75375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:08 compute-1 sudo[75375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:08 compute-1 sudo[75375]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:08 compute-1 sudo[75400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config
Jan 22 13:34:08 compute-1 sudo[75400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:08 compute-1 sudo[75400]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:08 compute-1 sudo[75425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:08 compute-1 sudo[75425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:08 compute-1 sudo[75425]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:08 compute-1 sudo[75450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config
Jan 22 13:34:08 compute-1 sudo[75450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:08 compute-1 sudo[75450]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:09 compute-1 sudo[75475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:09 compute-1 sudo[75475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:09 compute-1 sudo[75475]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:09 compute-1 sudo[75500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:34:09 compute-1 sudo[75500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:09 compute-1 sudo[75500]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:09 compute-1 sudo[75525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:09 compute-1 sudo[75525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:09 compute-1 sudo[75525]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:09 compute-1 sudo[75550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:34:09 compute-1 sudo[75550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:09 compute-1 sudo[75550]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:09 compute-1 sudo[75575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:09 compute-1 sudo[75575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:09 compute-1 sudo[75575]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:09 compute-1 sudo[75600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:34:09 compute-1 sudo[75600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:09 compute-1 sudo[75600]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:09 compute-1 sudo[75648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:09 compute-1 sudo[75648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:09 compute-1 sudo[75648]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:09 compute-1 sudo[75673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:34:09 compute-1 sudo[75673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:09 compute-1 sudo[75673]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:09 compute-1 sudo[75698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:09 compute-1 sudo[75698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:09 compute-1 sudo[75698]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:09 compute-1 sudo[75723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:34:09 compute-1 sudo[75723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:09 compute-1 sudo[75723]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:09 compute-1 sudo[75748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:09 compute-1 sudo[75748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:09 compute-1 sudo[75748]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:09 compute-1 sudo[75773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 13:34:09 compute-1 sudo[75773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:09 compute-1 sudo[75773]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:09 compute-1 sudo[75798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:09 compute-1 sudo[75798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:09 compute-1 sudo[75798]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:09 compute-1 sudo[75823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 22 13:34:09 compute-1 sudo[75823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:09 compute-1 sudo[75823]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:10 compute-1 sudo[75848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:10 compute-1 sudo[75848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:10 compute-1 sudo[75848]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:10 compute-1 sudo[75873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph
Jan 22 13:34:10 compute-1 sudo[75873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:10 compute-1 sudo[75873]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:10 compute-1 sudo[75898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:10 compute-1 sudo[75898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:10 compute-1 sudo[75898]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:10 compute-1 sudo[75923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.client.admin.keyring.new
Jan 22 13:34:10 compute-1 sudo[75923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:10 compute-1 sudo[75923]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:10 compute-1 sudo[75948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:10 compute-1 sudo[75948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:10 compute-1 sudo[75948]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:10 compute-1 sudo[75973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:34:10 compute-1 sudo[75973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:10 compute-1 sudo[75973]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:10 compute-1 sudo[75998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:10 compute-1 sudo[75998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:10 compute-1 sudo[75998]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:10 compute-1 sudo[76023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.client.admin.keyring.new
Jan 22 13:34:10 compute-1 sudo[76023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:10 compute-1 sudo[76023]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:10 compute-1 sudo[76071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:10 compute-1 sudo[76071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:10 compute-1 sudo[76071]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:10 compute-1 sudo[76096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.client.admin.keyring.new
Jan 22 13:34:10 compute-1 sudo[76096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:10 compute-1 sudo[76096]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:10 compute-1 sudo[76121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:10 compute-1 sudo[76121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:10 compute-1 sudo[76121]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:10 compute-1 sudo[76146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.client.admin.keyring.new
Jan 22 13:34:10 compute-1 sudo[76146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:10 compute-1 sudo[76146]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:10 compute-1 sudo[76171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:10 compute-1 sudo[76171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:10 compute-1 sudo[76171]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:11 compute-1 sudo[76196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 22 13:34:11 compute-1 sudo[76196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:11 compute-1 sudo[76196]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:11 compute-1 sudo[76221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:11 compute-1 sudo[76221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:11 compute-1 sudo[76221]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:11 compute-1 sudo[76246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config
Jan 22 13:34:11 compute-1 sudo[76246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:11 compute-1 sudo[76246]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:11 compute-1 sudo[76271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:11 compute-1 sudo[76271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:11 compute-1 sudo[76271]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:11 compute-1 sudo[76296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config
Jan 22 13:34:11 compute-1 sudo[76296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:11 compute-1 sudo[76296]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:11 compute-1 sudo[76321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:11 compute-1 sudo[76321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:11 compute-1 sudo[76321]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:11 compute-1 sudo[76346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring.new
Jan 22 13:34:11 compute-1 sudo[76346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:11 compute-1 sudo[76346]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:11 compute-1 sudo[76371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:11 compute-1 sudo[76371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:11 compute-1 sudo[76371]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:11 compute-1 sudo[76396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:34:11 compute-1 sudo[76396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:11 compute-1 sudo[76396]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:11 compute-1 sudo[76421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:11 compute-1 sudo[76421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:11 compute-1 sudo[76421]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:11 compute-1 sudo[76447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring.new
Jan 22 13:34:11 compute-1 sudo[76447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:11 compute-1 sudo[76447]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:11 compute-1 sudo[76495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:11 compute-1 sudo[76495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:11 compute-1 sudo[76495]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:11 compute-1 sudo[76520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring.new
Jan 22 13:34:11 compute-1 sudo[76520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:11 compute-1 sudo[76520]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:11 compute-1 sudo[76545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:11 compute-1 sudo[76545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:11 compute-1 sudo[76545]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:11 compute-1 sudo[76570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring.new
Jan 22 13:34:11 compute-1 sudo[76570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:11 compute-1 sudo[76570]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:12 compute-1 sudo[76595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:12 compute-1 sudo[76595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:12 compute-1 sudo[76595]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:12 compute-1 sudo[76620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring.new /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring
Jan 22 13:34:12 compute-1 sudo[76620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:12 compute-1 sudo[76620]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:12 compute-1 sudo[76645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:12 compute-1 sudo[76645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:12 compute-1 sudo[76645]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:12 compute-1 sudo[76670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:34:12 compute-1 sudo[76670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:12 compute-1 sudo[76670]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:12 compute-1 sudo[76695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:12 compute-1 sudo[76695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:12 compute-1 sudo[76695]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:12 compute-1 sudo[76720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:34:12 compute-1 sudo[76720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:12 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:34:12 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:34:12 compute-1 podman[76786]: 2026-01-22 13:34:12.837393936 +0000 UTC m=+0.022030778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:34:13 compute-1 podman[76786]: 2026-01-22 13:34:13.753897493 +0000 UTC m=+0.938534335 container create ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 13:34:13 compute-1 systemd[1]: Started libpod-conmon-ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c.scope.
Jan 22 13:34:13 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:34:13 compute-1 podman[76786]: 2026-01-22 13:34:13.847191901 +0000 UTC m=+1.031828733 container init ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 13:34:13 compute-1 podman[76786]: 2026-01-22 13:34:13.856111286 +0000 UTC m=+1.040748098 container start ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 13:34:13 compute-1 podman[76786]: 2026-01-22 13:34:13.859965253 +0000 UTC m=+1.044602085 container attach ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:34:13 compute-1 brave_ishizaka[76802]: 167 167
Jan 22 13:34:13 compute-1 systemd[1]: libpod-ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c.scope: Deactivated successfully.
Jan 22 13:34:13 compute-1 podman[76786]: 2026-01-22 13:34:13.864221409 +0000 UTC m=+1.048858221 container died ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:34:13 compute-1 systemd[1]: var-lib-containers-storage-overlay-eb0427dc426b7c1d1c34bb6530721b1c306ad9cf8bc78f6cb375c86ac002f5b6-merged.mount: Deactivated successfully.
Jan 22 13:34:13 compute-1 podman[76786]: 2026-01-22 13:34:13.90491523 +0000 UTC m=+1.089552042 container remove ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:34:13 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:34:13 compute-1 systemd[1]: libpod-conmon-ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c.scope: Deactivated successfully.
Jan 22 13:34:13 compute-1 systemd[1]: Reloading.
Jan 22 13:34:14 compute-1 systemd-rc-local-generator[76849]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:34:14 compute-1 systemd-sysv-generator[76853]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:34:14 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:34:14 compute-1 systemd[1]: Reloading.
Jan 22 13:34:14 compute-1 systemd-sysv-generator[76889]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:34:14 compute-1 systemd-rc-local-generator[76886]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:34:14 compute-1 systemd[1]: Reached target All Ceph clusters and services.
Jan 22 13:34:14 compute-1 systemd[1]: Reloading.
Jan 22 13:34:14 compute-1 systemd-rc-local-generator[76922]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:34:14 compute-1 systemd-sysv-generator[76927]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:34:14 compute-1 systemd[1]: Reached target Ceph cluster 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:34:15 compute-1 systemd[1]: Reloading.
Jan 22 13:34:15 compute-1 systemd-sysv-generator[76960]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:34:15 compute-1 systemd-rc-local-generator[76955]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:34:15 compute-1 systemd[1]: Reloading.
Jan 22 13:34:15 compute-1 systemd-rc-local-generator[77003]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:34:15 compute-1 systemd-sysv-generator[77007]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:34:16 compute-1 systemd[1]: Created slice Slice /system/ceph-088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:34:16 compute-1 systemd[1]: Reached target System Time Set.
Jan 22 13:34:16 compute-1 systemd[1]: Reached target System Time Synchronized.
Jan 22 13:34:16 compute-1 systemd[1]: Starting Ceph crash.compute-1 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 13:34:16 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:34:16 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:34:16 compute-1 podman[77059]: 2026-01-22 13:34:16.513621768 +0000 UTC m=+0.079716536 container create 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 13:34:16 compute-1 podman[77059]: 2026-01-22 13:34:16.456426854 +0000 UTC m=+0.022521632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:34:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/124975d8d98d08adf71407b0905c5f28b574dc10075759ae16d1ad1373565dba/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/124975d8d98d08adf71407b0905c5f28b574dc10075759ae16d1ad1373565dba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/124975d8d98d08adf71407b0905c5f28b574dc10075759ae16d1ad1373565dba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:16 compute-1 podman[77059]: 2026-01-22 13:34:16.628197752 +0000 UTC m=+0.194292530 container init 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 13:34:16 compute-1 podman[77059]: 2026-01-22 13:34:16.63614004 +0000 UTC m=+0.202234798 container start 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:34:16 compute-1 bash[77059]: 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891
Jan 22 13:34:16 compute-1 systemd[1]: Started Ceph crash.compute-1 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:34:16 compute-1 sudo[76720]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:16 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 22 13:34:16 compute-1 sudo[77079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:16 compute-1 sudo[77079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:16 compute-1 sudo[77079]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:16 compute-1 sudo[77106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:34:16 compute-1 sudo[77106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:16 compute-1 sudo[77106]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:16 compute-1 sudo[77131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:16 compute-1 sudo[77131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:16 compute-1 sudo[77131]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:17 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: 2026-01-22T13:34:17.114+0000 7f9412032640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 22 13:34:17 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: 2026-01-22T13:34:17.114+0000 7f9412032640 -1 AuthRegistry(0x7f940c067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 22 13:34:17 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: 2026-01-22T13:34:17.116+0000 7f9412032640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 22 13:34:17 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: 2026-01-22T13:34:17.116+0000 7f9412032640 -1 AuthRegistry(0x7f9412031000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 22 13:34:17 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: 2026-01-22T13:34:17.119+0000 7f940b7fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 22 13:34:17 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: 2026-01-22T13:34:17.119+0000 7f9412032640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 22 13:34:17 compute-1 sudo[77156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 13:34:17 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 22 13:34:17 compute-1 sudo[77156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:17 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 22 13:34:17 compute-1 podman[77230]: 2026-01-22 13:34:17.517820979 +0000 UTC m=+0.048594778 container create f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mendeleev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 13:34:17 compute-1 podman[77230]: 2026-01-22 13:34:17.495127034 +0000 UTC m=+0.025900853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:34:17 compute-1 systemd[1]: Started libpod-conmon-f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3.scope.
Jan 22 13:34:17 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:34:17 compute-1 podman[77230]: 2026-01-22 13:34:17.649262687 +0000 UTC m=+0.180036506 container init f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mendeleev, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 13:34:17 compute-1 podman[77230]: 2026-01-22 13:34:17.658021948 +0000 UTC m=+0.188795747 container start f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mendeleev, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 13:34:17 compute-1 podman[77230]: 2026-01-22 13:34:17.66133341 +0000 UTC m=+0.192107229 container attach f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 13:34:17 compute-1 lucid_mendeleev[77247]: 167 167
Jan 22 13:34:17 compute-1 systemd[1]: libpod-f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3.scope: Deactivated successfully.
Jan 22 13:34:17 compute-1 conmon[77247]: conmon f54dc757e4f669d313db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3.scope/container/memory.events
Jan 22 13:34:17 compute-1 podman[77230]: 2026-01-22 13:34:17.666388388 +0000 UTC m=+0.197162187 container died f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mendeleev, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 13:34:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-6a2ad9523f7099c64b2e8593c7b7b78e6e56dac441f4159f535cc1a1fd4c17e4-merged.mount: Deactivated successfully.
Jan 22 13:34:17 compute-1 podman[77230]: 2026-01-22 13:34:17.701906867 +0000 UTC m=+0.232680676 container remove f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 13:34:17 compute-1 systemd[1]: libpod-conmon-f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3.scope: Deactivated successfully.
Jan 22 13:34:17 compute-1 podman[77270]: 2026-01-22 13:34:17.878990791 +0000 UTC m=+0.048019403 container create 6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 13:34:17 compute-1 systemd[1]: Started libpod-conmon-6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf.scope.
Jan 22 13:34:17 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:34:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d6c15a4a7f4f63e738f0ef42a1ac9e86ef93fbf8c1ca4bc1d4c717b8e56930/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d6c15a4a7f4f63e738f0ef42a1ac9e86ef93fbf8c1ca4bc1d4c717b8e56930/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d6c15a4a7f4f63e738f0ef42a1ac9e86ef93fbf8c1ca4bc1d4c717b8e56930/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d6c15a4a7f4f63e738f0ef42a1ac9e86ef93fbf8c1ca4bc1d4c717b8e56930/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d6c15a4a7f4f63e738f0ef42a1ac9e86ef93fbf8c1ca4bc1d4c717b8e56930/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:17 compute-1 podman[77270]: 2026-01-22 13:34:17.857543301 +0000 UTC m=+0.026571923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:34:17 compute-1 podman[77270]: 2026-01-22 13:34:17.964012002 +0000 UTC m=+0.133040624 container init 6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:34:17 compute-1 podman[77270]: 2026-01-22 13:34:17.972380122 +0000 UTC m=+0.141408714 container start 6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 13:34:17 compute-1 podman[77270]: 2026-01-22 13:34:17.976001702 +0000 UTC m=+0.145030324 container attach 6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:34:18 compute-1 festive_hertz[77286]: --> passed data devices: 0 physical, 1 LVM
Jan 22 13:34:18 compute-1 festive_hertz[77286]: --> relative data size: 1.0
Jan 22 13:34:18 compute-1 festive_hertz[77286]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 13:34:18 compute-1 festive_hertz[77286]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 729e7fcc-4be0-4e65-a251-dac5739e2fea
Jan 22 13:34:19 compute-1 festive_hertz[77286]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 13:34:19 compute-1 festive_hertz[77286]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 22 13:34:19 compute-1 lvm[77334]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 13:34:19 compute-1 lvm[77334]: VG ceph_vg0 finished
Jan 22 13:34:19 compute-1 festive_hertz[77286]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 22 13:34:19 compute-1 festive_hertz[77286]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 13:34:19 compute-1 festive_hertz[77286]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 22 13:34:19 compute-1 festive_hertz[77286]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 22 13:34:19 compute-1 festive_hertz[77286]:  stderr: got monmap epoch 1
Jan 22 13:34:19 compute-1 festive_hertz[77286]: --> Creating keyring file for osd.1
Jan 22 13:34:19 compute-1 festive_hertz[77286]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 22 13:34:19 compute-1 festive_hertz[77286]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 22 13:34:19 compute-1 festive_hertz[77286]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 729e7fcc-4be0-4e65-a251-dac5739e2fea --setuser ceph --setgroup ceph
Jan 22 13:34:23 compute-1 festive_hertz[77286]:  stderr: 2026-01-22T13:34:19.997+0000 7f009c8c0740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 22 13:34:23 compute-1 festive_hertz[77286]:  stderr: 2026-01-22T13:34:19.997+0000 7f009c8c0740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 22 13:34:23 compute-1 festive_hertz[77286]:  stderr: 2026-01-22T13:34:19.997+0000 7f009c8c0740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 22 13:34:23 compute-1 festive_hertz[77286]:  stderr: 2026-01-22T13:34:19.997+0000 7f009c8c0740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 22 13:34:23 compute-1 festive_hertz[77286]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 22 13:34:23 compute-1 festive_hertz[77286]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 22 13:34:23 compute-1 festive_hertz[77286]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 22 13:34:23 compute-1 festive_hertz[77286]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 22 13:34:23 compute-1 festive_hertz[77286]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 22 13:34:23 compute-1 festive_hertz[77286]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 13:34:23 compute-1 festive_hertz[77286]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 22 13:34:23 compute-1 festive_hertz[77286]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 22 13:34:23 compute-1 festive_hertz[77286]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 22 13:34:23 compute-1 systemd[1]: libpod-6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf.scope: Deactivated successfully.
Jan 22 13:34:23 compute-1 systemd[1]: libpod-6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf.scope: Consumed 2.559s CPU time.
Jan 22 13:34:23 compute-1 podman[77270]: 2026-01-22 13:34:23.670897141 +0000 UTC m=+5.839925743 container died 6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 13:34:24 compute-1 systemd[1]: var-lib-containers-storage-overlay-a0d6c15a4a7f4f63e738f0ef42a1ac9e86ef93fbf8c1ca4bc1d4c717b8e56930-merged.mount: Deactivated successfully.
Jan 22 13:34:24 compute-1 podman[77270]: 2026-01-22 13:34:24.503116118 +0000 UTC m=+6.672145000 container remove 6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 13:34:24 compute-1 systemd[1]: libpod-conmon-6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf.scope: Deactivated successfully.
Jan 22 13:34:24 compute-1 sudo[77156]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:24 compute-1 sudo[78255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:24 compute-1 sudo[78255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:24 compute-1 sudo[78255]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:24 compute-1 sudo[78280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:34:24 compute-1 sudo[78280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:24 compute-1 sudo[78280]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:24 compute-1 sudo[78305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:24 compute-1 sudo[78305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:24 compute-1 sudo[78305]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:24 compute-1 sudo[78330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- lvm list --format json
Jan 22 13:34:24 compute-1 sudo[78330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:25 compute-1 podman[78393]: 2026-01-22 13:34:25.151816294 +0000 UTC m=+0.023640571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:34:26 compute-1 podman[78393]: 2026-01-22 13:34:26.018912762 +0000 UTC m=+0.890737019 container create dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 13:34:26 compute-1 systemd[1]: Started libpod-conmon-dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb.scope.
Jan 22 13:34:26 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:34:26 compute-1 podman[78393]: 2026-01-22 13:34:26.560183852 +0000 UTC m=+1.432008129 container init dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 13:34:26 compute-1 podman[78393]: 2026-01-22 13:34:26.570492986 +0000 UTC m=+1.442317263 container start dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 13:34:26 compute-1 podman[78393]: 2026-01-22 13:34:26.574916117 +0000 UTC m=+1.446740374 container attach dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Jan 22 13:34:26 compute-1 friendly_ardinghelli[78409]: 167 167
Jan 22 13:34:26 compute-1 systemd[1]: libpod-dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb.scope: Deactivated successfully.
Jan 22 13:34:26 compute-1 podman[78393]: 2026-01-22 13:34:26.577468957 +0000 UTC m=+1.449293214 container died dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:34:26 compute-1 systemd[1]: var-lib-containers-storage-overlay-218f37fce0ac8e1bba67247afa38d0110c9ac1ef69c00640cfa87a1dc8f0437c-merged.mount: Deactivated successfully.
Jan 22 13:34:26 compute-1 podman[78393]: 2026-01-22 13:34:26.629536031 +0000 UTC m=+1.501360288 container remove dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 13:34:26 compute-1 systemd[1]: libpod-conmon-dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb.scope: Deactivated successfully.
Jan 22 13:34:26 compute-1 podman[78431]: 2026-01-22 13:34:26.80463629 +0000 UTC m=+0.044073403 container create 229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:34:26 compute-1 systemd[1]: Started libpod-conmon-229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f.scope.
Jan 22 13:34:26 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:34:26 compute-1 podman[78431]: 2026-01-22 13:34:26.785962587 +0000 UTC m=+0.025399720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:34:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d27fad29194d3c2350c690f25d854db493df8bd6b32364525e64abf90adde4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d27fad29194d3c2350c690f25d854db493df8bd6b32364525e64abf90adde4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d27fad29194d3c2350c690f25d854db493df8bd6b32364525e64abf90adde4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d27fad29194d3c2350c690f25d854db493df8bd6b32364525e64abf90adde4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:26 compute-1 podman[78431]: 2026-01-22 13:34:26.897767545 +0000 UTC m=+0.137204678 container init 229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 22 13:34:26 compute-1 podman[78431]: 2026-01-22 13:34:26.906885615 +0000 UTC m=+0.146322728 container start 229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:34:26 compute-1 podman[78431]: 2026-01-22 13:34:26.910525645 +0000 UTC m=+0.149962778 container attach 229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]: {
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:     "1": [
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:         {
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:             "devices": [
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:                 "/dev/loop3"
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:             ],
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:             "lv_name": "ceph_lv0",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:             "lv_size": "7511998464",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8FXlZP-7Oop-LAub-ofen-l1Hk-nciS-XBs6Yr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=729e7fcc-4be0-4e65-a251-dac5739e2fea,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:             "lv_uuid": "8FXlZP-7Oop-LAub-ofen-l1Hk-nciS-XBs6Yr",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:             "name": "ceph_lv0",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:             "tags": {
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:                 "ceph.block_uuid": "8FXlZP-7Oop-LAub-ofen-l1Hk-nciS-XBs6Yr",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:                 "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:                 "ceph.cluster_name": "ceph",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:                 "ceph.crush_device_class": "",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:                 "ceph.encrypted": "0",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:                 "ceph.osd_fsid": "729e7fcc-4be0-4e65-a251-dac5739e2fea",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:                 "ceph.osd_id": "1",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:                 "ceph.type": "block",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:                 "ceph.vdo": "0"
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:             },
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:             "type": "block",
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:             "vg_name": "ceph_vg0"
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:         }
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]:     ]
Jan 22 13:34:27 compute-1 suspicious_maxwell[78447]: }
Jan 22 13:34:27 compute-1 systemd[1]: libpod-229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f.scope: Deactivated successfully.
Jan 22 13:34:27 compute-1 podman[78456]: 2026-01-22 13:34:27.743601287 +0000 UTC m=+0.030508141 container died 229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:34:27 compute-1 systemd[1]: var-lib-containers-storage-overlay-20d27fad29194d3c2350c690f25d854db493df8bd6b32364525e64abf90adde4-merged.mount: Deactivated successfully.
Jan 22 13:34:27 compute-1 podman[78456]: 2026-01-22 13:34:27.82618262 +0000 UTC m=+0.113089474 container remove 229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:34:27 compute-1 systemd[1]: libpod-conmon-229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f.scope: Deactivated successfully.
Jan 22 13:34:27 compute-1 sudo[78330]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:27 compute-1 sudo[78471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:27 compute-1 sudo[78471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:27 compute-1 sudo[78471]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:28 compute-1 sudo[78496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:34:28 compute-1 sudo[78496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:28 compute-1 sudo[78496]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:28 compute-1 sudo[78521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:28 compute-1 sudo[78521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:28 compute-1 sudo[78521]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:28 compute-1 sudo[78546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:34:28 compute-1 sudo[78546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:28 compute-1 podman[78611]: 2026-01-22 13:34:28.498015533 +0000 UTC m=+0.039659962 container create 0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sammet, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:34:28 compute-1 systemd[1]: Started libpod-conmon-0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc.scope.
Jan 22 13:34:28 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:34:28 compute-1 podman[78611]: 2026-01-22 13:34:28.48155767 +0000 UTC m=+0.023202129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:34:28 compute-1 podman[78611]: 2026-01-22 13:34:28.772799076 +0000 UTC m=+0.314443535 container init 0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 13:34:28 compute-1 podman[78611]: 2026-01-22 13:34:28.780854968 +0000 UTC m=+0.322499407 container start 0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sammet, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 13:34:28 compute-1 wizardly_sammet[78628]: 167 167
Jan 22 13:34:28 compute-1 systemd[1]: libpod-0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc.scope: Deactivated successfully.
Jan 22 13:34:28 compute-1 podman[78611]: 2026-01-22 13:34:28.809727453 +0000 UTC m=+0.351371922 container attach 0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 13:34:28 compute-1 podman[78611]: 2026-01-22 13:34:28.810811864 +0000 UTC m=+0.352456303 container died 0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sammet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:34:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-74ca805fe3d2873be071169c81c24b6292979b46fabff89c2ba3aedc624f3990-merged.mount: Deactivated successfully.
Jan 22 13:34:28 compute-1 podman[78611]: 2026-01-22 13:34:28.8698979 +0000 UTC m=+0.411542339 container remove 0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sammet, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Jan 22 13:34:28 compute-1 systemd[1]: libpod-conmon-0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc.scope: Deactivated successfully.
Jan 22 13:34:29 compute-1 podman[78662]: 2026-01-22 13:34:29.128908839 +0000 UTC m=+0.046445329 container create 66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:34:29 compute-1 systemd[1]: Started libpod-conmon-66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b.scope.
Jan 22 13:34:29 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:34:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8557d6638de90ac4f208d817743af41ab74aa0f9994c1a781c5a7f029467c74d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8557d6638de90ac4f208d817743af41ab74aa0f9994c1a781c5a7f029467c74d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8557d6638de90ac4f208d817743af41ab74aa0f9994c1a781c5a7f029467c74d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8557d6638de90ac4f208d817743af41ab74aa0f9994c1a781c5a7f029467c74d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8557d6638de90ac4f208d817743af41ab74aa0f9994c1a781c5a7f029467c74d/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:29 compute-1 podman[78662]: 2026-01-22 13:34:29.205160368 +0000 UTC m=+0.122696878 container init 66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 13:34:29 compute-1 podman[78662]: 2026-01-22 13:34:29.110279357 +0000 UTC m=+0.027815867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:34:29 compute-1 podman[78662]: 2026-01-22 13:34:29.214971738 +0000 UTC m=+0.132508228 container start 66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 13:34:29 compute-1 podman[78662]: 2026-01-22 13:34:29.219171164 +0000 UTC m=+0.136707654 container attach 66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:34:29 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test[78679]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 22 13:34:29 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test[78679]:                             [--no-systemd] [--no-tmpfs]
Jan 22 13:34:29 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test[78679]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 22 13:34:29 compute-1 systemd[1]: libpod-66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b.scope: Deactivated successfully.
Jan 22 13:34:29 compute-1 podman[78662]: 2026-01-22 13:34:29.894010329 +0000 UTC m=+0.811546819 container died 66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:34:29 compute-1 systemd[1]: var-lib-containers-storage-overlay-8557d6638de90ac4f208d817743af41ab74aa0f9994c1a781c5a7f029467c74d-merged.mount: Deactivated successfully.
Jan 22 13:34:29 compute-1 podman[78662]: 2026-01-22 13:34:29.947355808 +0000 UTC m=+0.864892298 container remove 66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 13:34:29 compute-1 systemd[1]: libpod-conmon-66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b.scope: Deactivated successfully.
Jan 22 13:34:30 compute-1 systemd[1]: Reloading.
Jan 22 13:34:30 compute-1 systemd-rc-local-generator[78740]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:34:30 compute-1 systemd-sysv-generator[78744]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:34:30 compute-1 systemd[1]: Reloading.
Jan 22 13:34:30 compute-1 systemd-sysv-generator[78784]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:34:30 compute-1 systemd-rc-local-generator[78781]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:34:30 compute-1 systemd[1]: Starting Ceph osd.1 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 13:34:30 compute-1 podman[78839]: 2026-01-22 13:34:30.97807814 +0000 UTC m=+0.040900356 container create 42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 13:34:31 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:34:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bdc69c1063553f26b2e46a03bc70769920f3023299b32777aed5239ee8e46ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bdc69c1063553f26b2e46a03bc70769920f3023299b32777aed5239ee8e46ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bdc69c1063553f26b2e46a03bc70769920f3023299b32777aed5239ee8e46ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bdc69c1063553f26b2e46a03bc70769920f3023299b32777aed5239ee8e46ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bdc69c1063553f26b2e46a03bc70769920f3023299b32777aed5239ee8e46ef/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:31 compute-1 podman[78839]: 2026-01-22 13:34:30.95952207 +0000 UTC m=+0.022344306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:34:31 compute-1 podman[78839]: 2026-01-22 13:34:31.236004219 +0000 UTC m=+0.298826465 container init 42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:34:31 compute-1 podman[78839]: 2026-01-22 13:34:31.243379853 +0000 UTC m=+0.306202069 container start 42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 13:34:31 compute-1 podman[78839]: 2026-01-22 13:34:31.247122195 +0000 UTC m=+0.309944411 container attach 42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:34:32 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate[78855]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 22 13:34:32 compute-1 bash[78839]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 22 13:34:32 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate[78855]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 13:34:32 compute-1 bash[78839]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 13:34:32 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate[78855]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 13:34:32 compute-1 bash[78839]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 13:34:32 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate[78855]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 13:34:32 compute-1 bash[78839]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 13:34:32 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate[78855]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 22 13:34:32 compute-1 bash[78839]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 22 13:34:32 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate[78855]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 22 13:34:32 compute-1 bash[78839]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 22 13:34:32 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate[78855]: --> ceph-volume raw activate successful for osd ID: 1
Jan 22 13:34:32 compute-1 bash[78839]: --> ceph-volume raw activate successful for osd ID: 1
Jan 22 13:34:32 compute-1 systemd[1]: libpod-42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03.scope: Deactivated successfully.
Jan 22 13:34:32 compute-1 systemd[1]: libpod-42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03.scope: Consumed 1.014s CPU time.
Jan 22 13:34:32 compute-1 podman[78966]: 2026-01-22 13:34:32.306894628 +0000 UTC m=+0.037848763 container died 42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 13:34:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-2bdc69c1063553f26b2e46a03bc70769920f3023299b32777aed5239ee8e46ef-merged.mount: Deactivated successfully.
Jan 22 13:34:32 compute-1 podman[78966]: 2026-01-22 13:34:32.365600564 +0000 UTC m=+0.096554699 container remove 42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:34:32 compute-1 podman[79025]: 2026-01-22 13:34:32.554954956 +0000 UTC m=+0.027912260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:34:32 compute-1 podman[79025]: 2026-01-22 13:34:32.745056299 +0000 UTC m=+0.218013553 container create a71bbb89b63e61ca8483c9344777a8412cac7a4405a697d4e42f7d1ed608e69e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:34:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44cff825b2cfca00a0461212a632bf0ec4c43d328399463f38e09f996b63048/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44cff825b2cfca00a0461212a632bf0ec4c43d328399463f38e09f996b63048/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44cff825b2cfca00a0461212a632bf0ec4c43d328399463f38e09f996b63048/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44cff825b2cfca00a0461212a632bf0ec4c43d328399463f38e09f996b63048/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44cff825b2cfca00a0461212a632bf0ec4c43d328399463f38e09f996b63048/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:32 compute-1 podman[79025]: 2026-01-22 13:34:32.814093619 +0000 UTC m=+0.287050883 container init a71bbb89b63e61ca8483c9344777a8412cac7a4405a697d4e42f7d1ed608e69e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 13:34:32 compute-1 podman[79025]: 2026-01-22 13:34:32.823446556 +0000 UTC m=+0.296403810 container start a71bbb89b63e61ca8483c9344777a8412cac7a4405a697d4e42f7d1ed608e69e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 13:34:32 compute-1 bash[79025]: a71bbb89b63e61ca8483c9344777a8412cac7a4405a697d4e42f7d1ed608e69e
Jan 22 13:34:32 compute-1 systemd[1]: Started Ceph osd.1 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:34:32 compute-1 sudo[78546]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:32 compute-1 ceph-osd[79044]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 13:34:32 compute-1 ceph-osd[79044]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 22 13:34:32 compute-1 ceph-osd[79044]: pidfile_write: ignore empty --pid-file
Jan 22 13:34:32 compute-1 ceph-osd[79044]: bdev(0x55b6f076f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 22 13:34:32 compute-1 ceph-osd[79044]: bdev(0x55b6f076f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 22 13:34:32 compute-1 ceph-osd[79044]: bdev(0x55b6f076f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 13:34:32 compute-1 ceph-osd[79044]: bdev(0x55b6f076f800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 13:34:32 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 13:34:32 compute-1 ceph-osd[79044]: bdev(0x55b6f15a7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 22 13:34:32 compute-1 ceph-osd[79044]: bdev(0x55b6f15a7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 22 13:34:32 compute-1 ceph-osd[79044]: bdev(0x55b6f15a7800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 13:34:32 compute-1 ceph-osd[79044]: bdev(0x55b6f15a7800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 13:34:32 compute-1 ceph-osd[79044]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 22 13:34:32 compute-1 ceph-osd[79044]: bdev(0x55b6f15a7800 /var/lib/ceph/osd/ceph-1/block) close
Jan 22 13:34:32 compute-1 sudo[79057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:32 compute-1 sudo[79057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:32 compute-1 sudo[79057]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:33 compute-1 sudo[79082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:34:33 compute-1 sudo[79082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:33 compute-1 sudo[79082]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:33 compute-1 sudo[79107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:33 compute-1 sudo[79107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:33 compute-1 sudo[79107]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:33 compute-1 sudo[79132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- raw list --format json
Jan 22 13:34:33 compute-1 sudo[79132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f076f800 /var/lib/ceph/osd/ceph-1/block) close
Jan 22 13:34:33 compute-1 ceph-osd[79044]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 22 13:34:33 compute-1 ceph-osd[79044]: load: jerasure load: lrc 
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 22 13:34:33 compute-1 podman[79202]: 2026-01-22 13:34:33.472653127 +0000 UTC m=+0.027426106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 22 13:34:33 compute-1 podman[79202]: 2026-01-22 13:34:33.677371372 +0000 UTC m=+0.232144321 container create a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 13:34:33 compute-1 systemd[1]: Started libpod-conmon-a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2.scope.
Jan 22 13:34:33 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:34:33 compute-1 podman[79202]: 2026-01-22 13:34:33.790501956 +0000 UTC m=+0.345274905 container init a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 13:34:33 compute-1 podman[79202]: 2026-01-22 13:34:33.798836205 +0000 UTC m=+0.353609154 container start a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 13:34:33 compute-1 podman[79202]: 2026-01-22 13:34:33.802116605 +0000 UTC m=+0.356889594 container attach a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:34:33 compute-1 practical_euler[79222]: 167 167
Jan 22 13:34:33 compute-1 systemd[1]: libpod-a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2.scope: Deactivated successfully.
Jan 22 13:34:33 compute-1 podman[79202]: 2026-01-22 13:34:33.806977359 +0000 UTC m=+0.361750318 container died a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:34:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-e8742766199166f07ebecfc350152b22c5fdd5fcb68f14f9b0b79882f33183a7-merged.mount: Deactivated successfully.
Jan 22 13:34:33 compute-1 podman[79202]: 2026-01-22 13:34:33.839977278 +0000 UTC m=+0.394750227 container remove a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:34:33 compute-1 systemd[1]: libpod-conmon-a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2.scope: Deactivated successfully.
Jan 22 13:34:33 compute-1 ceph-osd[79044]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 22 13:34:33 compute-1 ceph-osd[79044]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bluefs mount
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bluefs mount shared_bdev_used = 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: RocksDB version: 7.9.2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Git sha 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: DB SUMMARY
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: DB Session ID:  00CFL0TF3NW7HAFMQXJB
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: CURRENT file:  CURRENT
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                         Options.error_if_exists: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.create_if_missing: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                                     Options.env: 0x55b6f15f9c70
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                                Options.info_log: 0x55b6f07ecba0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                              Options.statistics: (nil)
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                               Options.use_fsync: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                              Options.db_log_dir: 
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                                 Options.wal_dir: db.wal
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.write_buffer_manager: 0x55b6f1702460
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.unordered_write: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                               Options.row_cache: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                              Options.wal_filter: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.two_write_queues: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.wal_compression: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.atomic_flush: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.max_background_jobs: 4
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.max_background_compactions: -1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.max_subcompactions: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.max_open_files: -1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Compression algorithms supported:
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         kZSTD supported: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         kXpressCompression supported: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         kBZip2Compression supported: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         kLZ4Compression supported: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         kZlibCompression supported: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         kLZ4HCCompression supported: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         kSnappyCompression supported: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e2dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec5c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e2430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec5c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e2430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec5c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e2430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b67f644f-16fd-42e9-98f5-fc9e121c20ca
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088873976203, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088873976456, "job": 1, "event": "recovery_finished"}
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 22 13:34:33 compute-1 ceph-osd[79044]: freelist init
Jan 22 13:34:33 compute-1 ceph-osd[79044]: freelist _read_cfg
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 22 13:34:33 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bluefs umount
Jan 22 13:34:33 compute-1 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) close
Jan 22 13:34:34 compute-1 podman[79246]: 2026-01-22 13:34:34.05840422 +0000 UTC m=+0.106449791 container create 6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shockley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 13:34:34 compute-1 podman[79246]: 2026-01-22 13:34:33.97993137 +0000 UTC m=+0.027976931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:34:34 compute-1 systemd[1]: Started libpod-conmon-6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db.scope.
Jan 22 13:34:34 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:34:34 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209b4a7f8fbf554d6dbdccbfaacd8ede958c68c19d6978a9a0de7d0797cfa04c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:34 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209b4a7f8fbf554d6dbdccbfaacd8ede958c68c19d6978a9a0de7d0797cfa04c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:34 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209b4a7f8fbf554d6dbdccbfaacd8ede958c68c19d6978a9a0de7d0797cfa04c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:34 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209b4a7f8fbf554d6dbdccbfaacd8ede958c68c19d6978a9a0de7d0797cfa04c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:34 compute-1 podman[79246]: 2026-01-22 13:34:34.188904892 +0000 UTC m=+0.236950473 container init 6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shockley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:34:34 compute-1 podman[79246]: 2026-01-22 13:34:34.197435837 +0000 UTC m=+0.245481388 container start 6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shockley, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 13:34:34 compute-1 podman[79246]: 2026-01-22 13:34:34.201238532 +0000 UTC m=+0.249284083 container attach 6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:34:34 compute-1 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 22 13:34:34 compute-1 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 22 13:34:34 compute-1 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 13:34:34 compute-1 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 13:34:34 compute-1 ceph-osd[79044]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 22 13:34:34 compute-1 ceph-osd[79044]: bluefs mount
Jan 22 13:34:34 compute-1 ceph-osd[79044]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: bluefs mount shared_bdev_used = 4718592
Jan 22 13:34:34 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: RocksDB version: 7.9.2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Git sha 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: DB SUMMARY
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: DB Session ID:  00CFL0TF3NW7HAFMQXJA
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: CURRENT file:  CURRENT
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                         Options.error_if_exists: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.create_if_missing: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                                     Options.env: 0x55b6f082e380
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                                Options.info_log: 0x55b6f07ed460
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                              Options.statistics: (nil)
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                               Options.use_fsync: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                              Options.db_log_dir: 
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                                 Options.wal_dir: db.wal
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.write_buffer_manager: 0x55b6f1702460
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.unordered_write: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                               Options.row_cache: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                              Options.wal_filter: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.two_write_queues: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.wal_compression: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.atomic_flush: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.max_background_jobs: 4
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.max_background_compactions: -1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.max_subcompactions: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.max_open_files: -1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Compression algorithms supported:
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         kZSTD supported: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         kXpressCompression supported: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         kBZip2Compression supported: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         kLZ4Compression supported: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         kZlibCompression supported: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         kLZ4HCCompression supported: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         kSnappyCompression supported: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07f68a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e3610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07f68a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e3610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07f68a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e3610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07f68a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e3610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07f68a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e3610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07f68a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e3610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07f68a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e3610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec320)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e3770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec320)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e3770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec320)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6f07e3770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b67f644f-16fd-42e9-98f5-fc9e121c20ca
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088874253111, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088874258050, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088874, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b67f644f-16fd-42e9-98f5-fc9e121c20ca", "db_session_id": "00CFL0TF3NW7HAFMQXJA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088874261802, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088874, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b67f644f-16fd-42e9-98f5-fc9e121c20ca", "db_session_id": "00CFL0TF3NW7HAFMQXJA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088874265174, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088874, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b67f644f-16fd-42e9-98f5-fc9e121c20ca", "db_session_id": "00CFL0TF3NW7HAFMQXJA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088874266609, "job": 1, "event": "recovery_finished"}
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b6f17c9c00
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: DB pointer 0x55b6f16eba00
Jan 22 13:34:34 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 22 13:34:34 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 22 13:34:34 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 13:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 22 13:34:34 compute-1 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 22 13:34:34 compute-1 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 22 13:34:34 compute-1 ceph-osd[79044]: _get_class not permitted to load lua
Jan 22 13:34:34 compute-1 ceph-osd[79044]: _get_class not permitted to load sdk
Jan 22 13:34:34 compute-1 ceph-osd[79044]: _get_class not permitted to load test_remote_reads
Jan 22 13:34:34 compute-1 ceph-osd[79044]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 22 13:34:34 compute-1 ceph-osd[79044]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 22 13:34:34 compute-1 ceph-osd[79044]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 22 13:34:34 compute-1 ceph-osd[79044]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 22 13:34:34 compute-1 ceph-osd[79044]: osd.1 0 load_pgs
Jan 22 13:34:34 compute-1 ceph-osd[79044]: osd.1 0 load_pgs opened 0 pgs
Jan 22 13:34:34 compute-1 ceph-osd[79044]: osd.1 0 log_to_monitors true
Jan 22 13:34:34 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1[79040]: 2026-01-22T13:34:34.725+0000 7f53dd1db740 -1 osd.1 0 log_to_monitors true
Jan 22 13:34:35 compute-1 musing_shockley[79456]: {
Jan 22 13:34:35 compute-1 musing_shockley[79456]:     "729e7fcc-4be0-4e65-a251-dac5739e2fea": {
Jan 22 13:34:35 compute-1 musing_shockley[79456]:         "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 13:34:35 compute-1 musing_shockley[79456]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 13:34:35 compute-1 musing_shockley[79456]:         "osd_id": 1,
Jan 22 13:34:35 compute-1 musing_shockley[79456]:         "osd_uuid": "729e7fcc-4be0-4e65-a251-dac5739e2fea",
Jan 22 13:34:35 compute-1 musing_shockley[79456]:         "type": "bluestore"
Jan 22 13:34:35 compute-1 musing_shockley[79456]:     }
Jan 22 13:34:35 compute-1 musing_shockley[79456]: }
Jan 22 13:34:35 compute-1 systemd[1]: libpod-6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db.scope: Deactivated successfully.
Jan 22 13:34:35 compute-1 podman[79246]: 2026-01-22 13:34:35.169991008 +0000 UTC m=+1.218036569 container died 6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 13:34:35 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 22 13:34:35 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 22 13:34:35 compute-1 systemd[1]: var-lib-containers-storage-overlay-209b4a7f8fbf554d6dbdccbfaacd8ede958c68c19d6978a9a0de7d0797cfa04c-merged.mount: Deactivated successfully.
Jan 22 13:34:35 compute-1 podman[79246]: 2026-01-22 13:34:35.910171993 +0000 UTC m=+1.958217534 container remove 6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shockley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:34:35 compute-1 systemd[1]: libpod-conmon-6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db.scope: Deactivated successfully.
Jan 22 13:34:35 compute-1 sudo[79132]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:36 compute-1 sudo[79707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:36 compute-1 sudo[79707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:36 compute-1 sudo[79707]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:36 compute-1 sudo[79732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:34:36 compute-1 sudo[79732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:36 compute-1 sudo[79732]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:36 compute-1 ceph-osd[79044]: osd.1 0 done with init, starting boot process
Jan 22 13:34:36 compute-1 ceph-osd[79044]: osd.1 0 start_boot
Jan 22 13:34:36 compute-1 ceph-osd[79044]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 22 13:34:36 compute-1 ceph-osd[79044]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 22 13:34:36 compute-1 ceph-osd[79044]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 22 13:34:36 compute-1 ceph-osd[79044]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 22 13:34:36 compute-1 ceph-osd[79044]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 22 13:34:36 compute-1 sudo[79757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:36 compute-1 sudo[79757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:36 compute-1 sudo[79757]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:36 compute-1 sudo[79782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:34:36 compute-1 sudo[79782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:36 compute-1 sudo[79782]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:36 compute-1 sudo[79807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:36 compute-1 sudo[79807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:36 compute-1 sudo[79807]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:36 compute-1 sudo[79832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:34:36 compute-1 sudo[79832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:37 compute-1 podman[79929]: 2026-01-22 13:34:37.394670046 +0000 UTC m=+0.192376247 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:34:37 compute-1 podman[79929]: 2026-01-22 13:34:37.625977062 +0000 UTC m=+0.423683233 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:34:37 compute-1 sudo[79832]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:37 compute-1 sudo[79982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:37 compute-1 sudo[79982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:37 compute-1 sudo[79982]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:38 compute-1 sudo[80007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:34:38 compute-1 sudo[80007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:38 compute-1 sudo[80007]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:38 compute-1 sudo[80032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:38 compute-1 sudo[80032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:38 compute-1 sudo[80032]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:38 compute-1 sudo[80057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:34:38 compute-1 sudo[80057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:38 compute-1 sudo[80057]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:38 compute-1 sudo[80111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:38 compute-1 sudo[80111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:38 compute-1 sudo[80111]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:38 compute-1 sudo[80136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:34:38 compute-1 sudo[80136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:38 compute-1 sudo[80136]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:38 compute-1 sudo[80161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:38 compute-1 sudo[80161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:38 compute-1 sudo[80161]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:39 compute-1 sudo[80186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- inventory --format=json-pretty --filter-for-batch
Jan 22 13:34:39 compute-1 sudo[80186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:39 compute-1 podman[80250]: 2026-01-22 13:34:39.369838484 +0000 UTC m=+0.066429079 container create 1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_villani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 13:34:39 compute-1 podman[80250]: 2026-01-22 13:34:39.325525785 +0000 UTC m=+0.022116400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:34:39 compute-1 systemd[1]: Started libpod-conmon-1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d.scope.
Jan 22 13:34:39 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:34:39 compute-1 podman[80250]: 2026-01-22 13:34:39.518182648 +0000 UTC m=+0.214773263 container init 1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_villani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:34:39 compute-1 podman[80250]: 2026-01-22 13:34:39.525097968 +0000 UTC m=+0.221688563 container start 1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 13:34:39 compute-1 gracious_villani[80266]: 167 167
Jan 22 13:34:39 compute-1 systemd[1]: libpod-1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d.scope: Deactivated successfully.
Jan 22 13:34:39 compute-1 podman[80250]: 2026-01-22 13:34:39.543502935 +0000 UTC m=+0.240093530 container attach 1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_villani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:34:39 compute-1 podman[80250]: 2026-01-22 13:34:39.543970048 +0000 UTC m=+0.240560643 container died 1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_villani, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 13:34:39 compute-1 systemd[1]: var-lib-containers-storage-overlay-8a47091d5d926327a44da33748ab42eaa3fa692d1769ca52a050ce5abe73d215-merged.mount: Deactivated successfully.
Jan 22 13:34:39 compute-1 podman[80250]: 2026-01-22 13:34:39.676731452 +0000 UTC m=+0.373322087 container remove 1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:34:39 compute-1 systemd[1]: libpod-conmon-1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d.scope: Deactivated successfully.
Jan 22 13:34:39 compute-1 podman[80288]: 2026-01-22 13:34:39.829419215 +0000 UTC m=+0.049670329 container create e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:34:39 compute-1 systemd[1]: Started libpod-conmon-e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc.scope.
Jan 22 13:34:39 compute-1 podman[80288]: 2026-01-22 13:34:39.801735593 +0000 UTC m=+0.021986717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:34:39 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:34:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd3ffcbaf0675f031649a4421495254aa4c27818b3b28b2ece3d279b69fce3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd3ffcbaf0675f031649a4421495254aa4c27818b3b28b2ece3d279b69fce3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd3ffcbaf0675f031649a4421495254aa4c27818b3b28b2ece3d279b69fce3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd3ffcbaf0675f031649a4421495254aa4c27818b3b28b2ece3d279b69fce3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:34:39 compute-1 podman[80288]: 2026-01-22 13:34:39.954781396 +0000 UTC m=+0.175032530 container init e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:34:39 compute-1 podman[80288]: 2026-01-22 13:34:39.964612296 +0000 UTC m=+0.184863410 container start e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 13:34:39 compute-1 podman[80288]: 2026-01-22 13:34:39.980920725 +0000 UTC m=+0.201171869 container attach e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 13:34:41 compute-1 stoic_robinson[80305]: [
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:     {
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:         "available": false,
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:         "ceph_device": false,
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:         "lsm_data": {},
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:         "lvs": [],
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:         "path": "/dev/sr0",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:         "rejected_reasons": [
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "Has a FileSystem",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "Insufficient space (<5GB)"
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:         ],
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:         "sys_api": {
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "actuators": null,
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "device_nodes": "sr0",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "devname": "sr0",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "human_readable_size": "482.00 KB",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "id_bus": "ata",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "model": "QEMU DVD-ROM",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "nr_requests": "2",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "parent": "/dev/sr0",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "partitions": {},
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "path": "/dev/sr0",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "removable": "1",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "rev": "2.5+",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "ro": "0",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "rotational": "1",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "sas_address": "",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "sas_device_handle": "",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "scheduler_mode": "mq-deadline",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "sectors": 0,
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "sectorsize": "2048",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "size": 493568.0,
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "support_discard": "2048",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "type": "disk",
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:             "vendor": "QEMU"
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:         }
Jan 22 13:34:41 compute-1 stoic_robinson[80305]:     }
Jan 22 13:34:41 compute-1 stoic_robinson[80305]: ]
Jan 22 13:34:41 compute-1 systemd[1]: libpod-e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc.scope: Deactivated successfully.
Jan 22 13:34:41 compute-1 podman[80288]: 2026-01-22 13:34:41.141422099 +0000 UTC m=+1.361673223 container died e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:34:41 compute-1 systemd[1]: libpod-e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc.scope: Consumed 1.170s CPU time.
Jan 22 13:34:41 compute-1 systemd[1]: var-lib-containers-storage-overlay-6fd3ffcbaf0675f031649a4421495254aa4c27818b3b28b2ece3d279b69fce3b-merged.mount: Deactivated successfully.
Jan 22 13:34:41 compute-1 podman[80288]: 2026-01-22 13:34:41.24387599 +0000 UTC m=+1.464127104 container remove e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:34:41 compute-1 systemd[1]: libpod-conmon-e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc.scope: Deactivated successfully.
Jan 22 13:34:41 compute-1 sudo[80186]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:42 compute-1 ceph-osd[79044]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 13.546 iops: 3467.856 elapsed_sec: 0.865
Jan 22 13:34:42 compute-1 ceph-osd[79044]: log_channel(cluster) log [WRN] : OSD bench result of 3467.855722 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 22 13:34:42 compute-1 ceph-osd[79044]: osd.1 0 waiting for initial osdmap
Jan 22 13:34:42 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1[79040]: 2026-01-22T13:34:42.093+0000 7f53d915b640 -1 osd.1 0 waiting for initial osdmap
Jan 22 13:34:42 compute-1 ceph-osd[79044]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 22 13:34:42 compute-1 ceph-osd[79044]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 22 13:34:42 compute-1 ceph-osd[79044]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 22 13:34:42 compute-1 ceph-osd[79044]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Jan 22 13:34:42 compute-1 ceph-osd[79044]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 22 13:34:42 compute-1 ceph-osd[79044]: osd.1 12 set_numa_affinity not setting numa affinity
Jan 22 13:34:42 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1[79040]: 2026-01-22T13:34:42.128+0000 7f53d4783640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 22 13:34:42 compute-1 ceph-osd[79044]: osd.1 12 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 22 13:34:43 compute-1 ceph-osd[79044]: osd.1 13 state: booting -> active
Jan 22 13:34:43 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:44 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:44 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 15 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 20 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=20 pruub=9.890140533s) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active pruub 26.615266800s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=20 pruub=9.890140533s) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown pruub 26.615266800s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.2( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.1a( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.19( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.7( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.8( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.5( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.6( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.f( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.10( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.11( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.1( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.12( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.b( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.c( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.d( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.e( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.15( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.16( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.3( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.13( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.14( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.17( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.18( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.1d( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.1e( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.1b( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.1c( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.9( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.a( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.1f( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.4( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.1d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.1e( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.1c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.a( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.9( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.7( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.6( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.2( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.4( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.5( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.1( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.3( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.0( empty local-lis/les=20/22 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.8( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.e( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.12( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.11( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.13( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.1f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.15( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.16( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.14( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.18( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.10( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.1b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.1a( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.19( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:53 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.17( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:55 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:34:56 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:34:56 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1 deep-scrub starts
Jan 22 13:34:56 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1 deep-scrub ok
Jan 22 13:34:59 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.2 deep-scrub starts
Jan 22 13:35:00 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.2 deep-scrub ok
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.1a( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.16( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.14( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.15( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.13( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.10( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.11( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.e( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.f( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.c( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.d( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.3( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.5( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.9( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.a( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.1d( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.1c( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1e( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.483595848s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.381401062s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.485149384s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.383068085s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1e( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.483470917s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.381401062s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.485075951s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.383068085s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.9( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484004974s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382072449s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.6( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484086037s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382186890s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.6( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484064102s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382186890s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.9( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.483953476s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382072449s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.4( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484031677s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382320404s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.a( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.483535767s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.381839752s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.4( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484002113s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382320404s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.a( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.483474731s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.381839752s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484015465s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382427216s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484289169s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382717133s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484019279s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382514954s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.483936310s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382427216s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.483990669s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382514954s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.e( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484164238s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382698059s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484164238s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382717133s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.e( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484102249s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382698059s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.10( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484872818s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.383556366s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.13( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484283447s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382980347s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.13( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484254837s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382980347s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.15( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484431267s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.383197784s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.10( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484824181s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.383556366s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.19( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484666824s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.383541107s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.15( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484370232s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.383197784s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.19( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484643936s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.383541107s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484479904s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.383563995s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484447479s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.383563995s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.14( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.1a( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.13( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.10( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.d( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.1c( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:35:03 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.3 deep-scrub starts
Jan 22 13:35:03 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.3 deep-scrub ok
Jan 22 13:35:04 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 22 13:35:04 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 22 13:35:06 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 22 13:35:06 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 22 13:35:08 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 22 13:35:08 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 22 13:35:10 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 22 13:35:10 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 22 13:35:12 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 22 13:35:12 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 22 13:35:14 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 22 13:35:14 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 22 13:35:15 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 22 13:35:15 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 22 13:35:16 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 22 13:35:16 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 22 13:35:19 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Jan 22 13:35:19 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Jan 22 13:35:20 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Jan 22 13:35:20 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Jan 22 13:35:22 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 22 13:35:22 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 22 13:35:23 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 22 13:35:23 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 22 13:35:27 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 22 13:35:27 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 22 13:35:29 compute-1 sudo[81337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:29 compute-1 sudo[81337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:29 compute-1 sudo[81337]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:29 compute-1 sudo[81362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:35:29 compute-1 sudo[81362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:29 compute-1 sudo[81362]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:29 compute-1 sudo[81387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:29 compute-1 sudo[81387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:29 compute-1 sudo[81387]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:29 compute-1 sudo[81412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:35:29 compute-1 sudo[81412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:29 compute-1 podman[81479]: 2026-01-22 13:35:29.619604071 +0000 UTC m=+0.047952281 container create 4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 13:35:29 compute-1 systemd[72521]: Starting Mark boot as successful...
Jan 22 13:35:29 compute-1 systemd[72521]: Finished Mark boot as successful.
Jan 22 13:35:29 compute-1 systemd[1]: Started libpod-conmon-4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b.scope.
Jan 22 13:35:29 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:35:29 compute-1 podman[81479]: 2026-01-22 13:35:29.597347778 +0000 UTC m=+0.025696018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:29 compute-1 podman[81479]: 2026-01-22 13:35:29.692749625 +0000 UTC m=+0.121097825 container init 4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldwasser, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 13:35:29 compute-1 podman[81479]: 2026-01-22 13:35:29.701606126 +0000 UTC m=+0.129954346 container start 4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldwasser, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 13:35:29 compute-1 podman[81479]: 2026-01-22 13:35:29.705373758 +0000 UTC m=+0.133721988 container attach 4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldwasser, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:35:29 compute-1 clever_goldwasser[81497]: 167 167
Jan 22 13:35:29 compute-1 systemd[1]: libpod-4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b.scope: Deactivated successfully.
Jan 22 13:35:29 compute-1 podman[81479]: 2026-01-22 13:35:29.708329298 +0000 UTC m=+0.136677518 container died 4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldwasser, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 13:35:29 compute-1 systemd[1]: var-lib-containers-storage-overlay-780022ee69a15f64b9042e1cc2d8cdaaa6aa70e48d7686e4c571aa7dfed57839-merged.mount: Deactivated successfully.
Jan 22 13:35:29 compute-1 podman[81479]: 2026-01-22 13:35:29.747512901 +0000 UTC m=+0.175861111 container remove 4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldwasser, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:35:29 compute-1 systemd[1]: libpod-conmon-4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b.scope: Deactivated successfully.
Jan 22 13:35:29 compute-1 podman[81515]: 2026-01-22 13:35:29.827678046 +0000 UTC m=+0.044685453 container create a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 13:35:29 compute-1 systemd[1]: Started libpod-conmon-a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed.scope.
Jan 22 13:35:29 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:35:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b19e4b60025e3bedf32e7dec3de7fba2109a9b66f3a0fc49378f66c05baf7e3/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b19e4b60025e3bedf32e7dec3de7fba2109a9b66f3a0fc49378f66c05baf7e3/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b19e4b60025e3bedf32e7dec3de7fba2109a9b66f3a0fc49378f66c05baf7e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b19e4b60025e3bedf32e7dec3de7fba2109a9b66f3a0fc49378f66c05baf7e3/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:29 compute-1 podman[81515]: 2026-01-22 13:35:29.806937043 +0000 UTC m=+0.023944530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:29 compute-1 podman[81515]: 2026-01-22 13:35:29.909794703 +0000 UTC m=+0.126802120 container init a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:35:29 compute-1 podman[81515]: 2026-01-22 13:35:29.916286989 +0000 UTC m=+0.133294396 container start a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:35:29 compute-1 podman[81515]: 2026-01-22 13:35:29.922116787 +0000 UTC m=+0.139124214 container attach a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:35:30 compute-1 systemd[1]: libpod-a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed.scope: Deactivated successfully.
Jan 22 13:35:30 compute-1 podman[81515]: 2026-01-22 13:35:30.007465123 +0000 UTC m=+0.224472560 container died a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kepler, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:35:30 compute-1 systemd[1]: var-lib-containers-storage-overlay-6b19e4b60025e3bedf32e7dec3de7fba2109a9b66f3a0fc49378f66c05baf7e3-merged.mount: Deactivated successfully.
Jan 22 13:35:30 compute-1 podman[81515]: 2026-01-22 13:35:30.046995195 +0000 UTC m=+0.264002602 container remove a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:35:30 compute-1 systemd[1]: libpod-conmon-a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed.scope: Deactivated successfully.
Jan 22 13:35:30 compute-1 systemd[1]: Reloading.
Jan 22 13:35:30 compute-1 systemd-rc-local-generator[81600]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:35:30 compute-1 systemd-sysv-generator[81604]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:35:30 compute-1 systemd[1]: Reloading.
Jan 22 13:35:30 compute-1 systemd-rc-local-generator[81638]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:35:30 compute-1 systemd-sysv-generator[81641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:35:30 compute-1 systemd[1]: Starting Ceph mon.compute-1 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 13:35:31 compute-1 podman[81695]: 2026-01-22 13:35:30.936529182 +0000 UTC m=+0.043731486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:31 compute-1 podman[81695]: 2026-01-22 13:35:31.492834492 +0000 UTC m=+0.600036706 container create 86c62012975c4d3a4f66b2322215389f98408803e87aba4b137aac7442cee7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-1, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 13:35:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1caa8f50f0879bad3532cce712a0d881d19081eae018c33d05d80d745a71aacc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1caa8f50f0879bad3532cce712a0d881d19081eae018c33d05d80d745a71aacc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1caa8f50f0879bad3532cce712a0d881d19081eae018c33d05d80d745a71aacc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1caa8f50f0879bad3532cce712a0d881d19081eae018c33d05d80d745a71aacc/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:31 compute-1 podman[81695]: 2026-01-22 13:35:31.704958446 +0000 UTC m=+0.812160710 container init 86c62012975c4d3a4f66b2322215389f98408803e87aba4b137aac7442cee7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 13:35:31 compute-1 podman[81695]: 2026-01-22 13:35:31.712420398 +0000 UTC m=+0.819622612 container start 86c62012975c4d3a4f66b2322215389f98408803e87aba4b137aac7442cee7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:35:31 compute-1 bash[81695]: 86c62012975c4d3a4f66b2322215389f98408803e87aba4b137aac7442cee7f0
Jan 22 13:35:31 compute-1 systemd[1]: Started Ceph mon.compute-1 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:35:31 compute-1 ceph-mon[81715]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 13:35:31 compute-1 ceph-mon[81715]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pidfile_write: ignore empty --pid-file
Jan 22 13:35:31 compute-1 ceph-mon[81715]: load: jerasure load: lrc 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: RocksDB version: 7.9.2
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Git sha 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: DB SUMMARY
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: DB Session ID:  61AVSUXQ8FJR5Z10R2GN
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: CURRENT file:  CURRENT
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                         Options.error_if_exists: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                       Options.create_if_missing: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                                     Options.env: 0x55f766f40c40
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                                Options.info_log: 0x55f7686b0fc0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                              Options.statistics: (nil)
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                               Options.use_fsync: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                              Options.db_log_dir: 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                                 Options.wal_dir: 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                    Options.write_buffer_manager: 0x55f7686c0b40
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                  Options.unordered_write: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                               Options.row_cache: None
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                              Options.wal_filter: None
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.two_write_queues: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.wal_compression: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.atomic_flush: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.max_background_jobs: 2
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.max_background_compactions: -1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.max_subcompactions: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.max_total_wal_size: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                          Options.max_open_files: -1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:       Options.compaction_readahead_size: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Compression algorithms supported:
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         kZSTD supported: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         kXpressCompression supported: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         kBZip2Compression supported: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         kLZ4Compression supported: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         kZlibCompression supported: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         kLZ4HCCompression supported: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         kSnappyCompression supported: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:           Options.merge_operator: 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:        Options.compaction_filter: None
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f7686b0c00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f7686a91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:        Options.write_buffer_size: 33554432
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:  Options.max_write_buffer_number: 2
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:          Options.compression: NoCompression
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.num_levels: 7
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b45e9535-17c1-4c17-af76-e2f7345eb341
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088931764497, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088931766494, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088931766643, "job": 1, "event": "recovery_finished"}
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f7686d2e00
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: DB pointer 0x55f76875c000
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 13:35:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Jan 22 13:35:31 compute-1 ceph-mon[81715]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Jan 22 13:35:31 compute-1 ceph-mon[81715]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:35:31 compute-1 sudo[81412]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(???) e0 preinit fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).mds e2 new map
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:35:18.163248+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 0 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 0 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 1 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 1 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e17 e17: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e18 e18: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e19 e19: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e20 e20: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e21 e21: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e22 e22: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e23 e23: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e24 e24: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e25 e25: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e26 e26: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e27 e27: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e28 e28: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e29 e29: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e30 e30: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e31 e31: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e32 e32: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e33 e33: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e33 crush map has features 3314933000852226048, adjusting msgr requires
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e33 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e33 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e33 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/974439093' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/974439093' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e14: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v65: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e15: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2472273245' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v67: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2472273245' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e16: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mgrmap e9: compute-0.nyayzk(active, since 2m)
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e17: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/105373315' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v70: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/105373315' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e18: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e19: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2816658728' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v73: 4 pgs: 4 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2816658728' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e20: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e21: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1671536897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v76: 67 pgs: 63 unknown, 4 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1671536897' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e22: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.1 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.1 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v78: 68 pgs: 33 unknown, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e23: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2138351977' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2138351977' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e24: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v81: 69 pgs: 1 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.2 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.2 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e25: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1551997886' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.1 deep-scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.1 deep-scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1551997886' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e26: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v84: 69 pgs: 1 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1090994608' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.3 deep-scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.3 deep-scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1090994608' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e27: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v86: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.4 deep-scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.2 deep-scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.2 deep-scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.4 deep-scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e28: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3233251670' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.6 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.6 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3233251670' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e29: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v89: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.7 deep-scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.7 deep-scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/677900918' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.8 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.8 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/677900918' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e30: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v91: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.3 deep-scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.3 deep-scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.b scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.b scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1174767820' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.5 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.5 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.12 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.12 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1174767820' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e31: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v93: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.17 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.17 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3318117351' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.7 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.7 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v94: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3318117351' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e32: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.8 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.8 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v96: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.18 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.18 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.b scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.b scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.19 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.19 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v97: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1015326372' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1015326372' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.1b scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.1b scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.f scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.f scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.1e scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.1e scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2012634198' entity='client.admin' 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v98: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.1f scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 3.1f scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.11 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.11 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.14237 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Saving service ingress.rgw.default spec with placement count:2
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.12 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.12 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v99: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.14 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.14 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.1e scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: osdmap e33: 2 total, 2 up, 2 in
Jan 22 13:35:31 compute-1 ceph-mon[81715]: fsmap cephfs:0
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.1e scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.14239 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v101: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.6 deep-scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.6 deep-scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.9 deep-scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.9 deep-scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.16 deep-scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.16 deep-scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.14241 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v102: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.1f scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.1f scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Updating compute-2:/etc/ceph/ceph.conf
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.17 deep-scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.17 deep-scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.4 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.4 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v103: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.18 scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.18 scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.c scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.c scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/4027153888' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.1a scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.1a scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/4027153888' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v104: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.a deep-scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.a deep-scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v105: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Deploying daemon mon.compute-2 on compute-2
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.d deep-scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.d deep-scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 22 13:35:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2935446327' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 13:35:31 compute-1 ceph-mon[81715]: pgmap v106: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.1c scrub starts
Jan 22 13:35:31 compute-1 ceph-mon[81715]: 2.1c scrub ok
Jan 22 13:35:31 compute-1 ceph-mon[81715]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..8) refresh upgraded, format 0 -> 3
Jan 22 13:35:34 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 22 13:35:34 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 22 13:35:35 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 22 13:35:36 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 22 13:35:36 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 22 13:35:36 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 22 13:35:37 compute-1 ceph-mon[81715]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Jan 22 13:35:37 compute-1 ceph-mon[81715]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 22 13:35:37 compute-1 ceph-mon[81715]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 22 13:35:37 compute-1 ceph-mon[81715]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 13:35:38 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 22 13:35:38 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 22 13:35:41 compute-1 ceph-mon[81715]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 13:35:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 22 13:35:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 22 13:35:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 13:35:41 compute-1 ceph-mon[81715]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2026-01-22T13:35:29.957405Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
Jan 22 13:35:41 compute-1 ceph-mon[81715]: 3.16 scrub starts
Jan 22 13:35:41 compute-1 ceph-mon[81715]: 3.16 scrub ok
Jan 22 13:35:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 13:35:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 13:35:41 compute-1 ceph-mon[81715]: mon.compute-0 calling monitor election
Jan 22 13:35:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 13:35:41 compute-1 ceph-mon[81715]: mon.compute-2 calling monitor election
Jan 22 13:35:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 13:35:41 compute-1 ceph-mon[81715]: 3.1a scrub starts
Jan 22 13:35:41 compute-1 ceph-mon[81715]: 3.1a scrub ok
Jan 22 13:35:41 compute-1 ceph-mon[81715]: pgmap v111: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:41 compute-1 ceph-mon[81715]: 2.10 scrub starts
Jan 22 13:35:41 compute-1 ceph-mon[81715]: 2.10 scrub ok
Jan 22 13:35:41 compute-1 ceph-mon[81715]: mon.compute-1 calling monitor election
Jan 22 13:35:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 13:35:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 13:35:41 compute-1 ceph-mon[81715]: 3.15 scrub starts
Jan 22 13:35:41 compute-1 ceph-mon[81715]: 3.15 scrub ok
Jan 22 13:35:41 compute-1 ceph-mon[81715]: pgmap v112: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:41 compute-1 ceph-mon[81715]: 2.15 scrub starts
Jan 22 13:35:41 compute-1 ceph-mon[81715]: 2.15 scrub ok
Jan 22 13:35:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 13:35:41 compute-1 ceph-mon[81715]: 2.1b scrub starts
Jan 22 13:35:41 compute-1 ceph-mon[81715]: 2.1b scrub ok
Jan 22 13:35:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 13:35:41 compute-1 ceph-mon[81715]: pgmap v113: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:41 compute-1 ceph-mon[81715]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 22 13:35:41 compute-1 ceph-mon[81715]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 22 13:35:41 compute-1 ceph-mon[81715]: fsmap cephfs:0
Jan 22 13:35:41 compute-1 ceph-mon[81715]: osdmap e33: 2 total, 2 up, 2 in
Jan 22 13:35:41 compute-1 ceph-mon[81715]: mgrmap e9: compute-0.nyayzk(active, since 2m)
Jan 22 13:35:41 compute-1 ceph-mon[81715]: Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Jan 22 13:35:41 compute-1 ceph-mon[81715]: [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Jan 22 13:35:41 compute-1 ceph-mon[81715]:     fs cephfs is offline because no MDS is active for it.
Jan 22 13:35:41 compute-1 ceph-mon[81715]: [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Jan 22 13:35:41 compute-1 ceph-mon[81715]:     fs cephfs has 0 MDS online, but wants 1
Jan 22 13:35:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.tjdsdx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 13:35:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e33 _set_new_cache_sizes cache_size:1019919786 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:35:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.tjdsdx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 22 13:35:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 13:35:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:35:43 compute-1 ceph-mon[81715]: Deploying daemon mgr.compute-2.tjdsdx on compute-2
Jan 22 13:35:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 13:35:43 compute-1 sudo[81754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:43 compute-1 sudo[81754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:43 compute-1 sudo[81754]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:43 compute-1 sudo[81779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:35:43 compute-1 sudo[81779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:43 compute-1 sudo[81779]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:43 compute-1 sudo[81804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:43 compute-1 sudo[81804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:43 compute-1 sudo[81804]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:43 compute-1 sudo[81829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:35:43 compute-1 sudo[81829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:43 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 22 13:35:43 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 22 13:35:44 compute-1 podman[81893]: 2026-01-22 13:35:44.166360765 +0000 UTC m=+0.049356220 container create 13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 13:35:44 compute-1 systemd[1]: Started libpod-conmon-13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3.scope.
Jan 22 13:35:44 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:35:44 compute-1 podman[81893]: 2026-01-22 13:35:44.143480555 +0000 UTC m=+0.026476030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:44 compute-1 podman[81893]: 2026-01-22 13:35:44.247566128 +0000 UTC m=+0.130561603 container init 13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 22 13:35:44 compute-1 podman[81893]: 2026-01-22 13:35:44.255346919 +0000 UTC m=+0.138342374 container start 13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 13:35:44 compute-1 podman[81893]: 2026-01-22 13:35:44.258589857 +0000 UTC m=+0.141585312 container attach 13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:35:44 compute-1 systemd[1]: libpod-13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3.scope: Deactivated successfully.
Jan 22 13:35:44 compute-1 gracious_carver[81909]: 167 167
Jan 22 13:35:44 compute-1 podman[81893]: 2026-01-22 13:35:44.262298607 +0000 UTC m=+0.145294062 container died 13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:35:44 compute-1 conmon[81909]: conmon 13ea967c5cf8b2f1c6ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3.scope/container/memory.events
Jan 22 13:35:44 compute-1 systemd[1]: var-lib-containers-storage-overlay-8be87f9607915128146b043fd27fa9d4ef1c37b7d5b71fe47ee4a2fa8ce38499-merged.mount: Deactivated successfully.
Jan 22 13:35:44 compute-1 podman[81893]: 2026-01-22 13:35:44.296295949 +0000 UTC m=+0.179291404 container remove 13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 13:35:44 compute-1 systemd[1]: libpod-conmon-13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3.scope: Deactivated successfully.
Jan 22 13:35:44 compute-1 systemd[1]: Reloading.
Jan 22 13:35:44 compute-1 systemd-rc-local-generator[81953]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:35:44 compute-1 systemd-sysv-generator[81957]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:35:44 compute-1 ceph-mon[81715]: pgmap v114: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hzmatt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 13:35:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hzmatt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 22 13:35:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 13:35:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:35:44 compute-1 ceph-mon[81715]: Deploying daemon mgr.compute-1.hzmatt on compute-1
Jan 22 13:35:44 compute-1 systemd[1]: Reloading.
Jan 22 13:35:44 compute-1 systemd-sysv-generator[82000]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:35:44 compute-1 systemd-rc-local-generator[81997]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:35:44 compute-1 systemd[1]: Starting Ceph mgr.compute-1.hzmatt for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 13:35:44 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 22 13:35:44 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 22 13:35:45 compute-1 podman[82053]: 2026-01-22 13:35:45.096866584 +0000 UTC m=+0.043154861 container create 48a673850449621d1412afa74c1be893b279247df84600509ea83b75b992c8df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:35:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ea4a8356163e77a780fdcc84d02066fd87cbef8f64b5dafa995880bf2c1845/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ea4a8356163e77a780fdcc84d02066fd87cbef8f64b5dafa995880bf2c1845/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ea4a8356163e77a780fdcc84d02066fd87cbef8f64b5dafa995880bf2c1845/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ea4a8356163e77a780fdcc84d02066fd87cbef8f64b5dafa995880bf2c1845/merged/var/lib/ceph/mgr/ceph-compute-1.hzmatt supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:45 compute-1 podman[82053]: 2026-01-22 13:35:45.157642743 +0000 UTC m=+0.103931050 container init 48a673850449621d1412afa74c1be893b279247df84600509ea83b75b992c8df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 13:35:45 compute-1 podman[82053]: 2026-01-22 13:35:45.16342207 +0000 UTC m=+0.109710347 container start 48a673850449621d1412afa74c1be893b279247df84600509ea83b75b992c8df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 13:35:45 compute-1 bash[82053]: 48a673850449621d1412afa74c1be893b279247df84600509ea83b75b992c8df
Jan 22 13:35:45 compute-1 podman[82053]: 2026-01-22 13:35:45.078940388 +0000 UTC m=+0.025228685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:45 compute-1 systemd[1]: Started Ceph mgr.compute-1.hzmatt for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:35:45 compute-1 sudo[81829]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:45 compute-1 ceph-mgr[82073]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 13:35:45 compute-1 ceph-mgr[82073]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 22 13:35:45 compute-1 ceph-mgr[82073]: pidfile_write: ignore empty --pid-file
Jan 22 13:35:45 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'alerts'
Jan 22 13:35:45 compute-1 ceph-mon[81715]: 3.11 scrub starts
Jan 22 13:35:45 compute-1 ceph-mon[81715]: 3.11 scrub ok
Jan 22 13:35:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 13:35:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 13:35:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:35:45 compute-1 ceph-mon[81715]: Deploying daemon crash.compute-2 on compute-2
Jan 22 13:35:45 compute-1 ceph-mon[81715]: pgmap v115: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:45 compute-1 ceph-mgr[82073]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 13:35:45 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'balancer'
Jan 22 13:35:45 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:45.642+0000 7fb431a22140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 13:35:45 compute-1 ceph-mgr[82073]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 13:35:45 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'cephadm'
Jan 22 13:35:45 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:45.923+0000 7fb431a22140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 13:35:46 compute-1 ceph-mon[81715]: 2.e scrub starts
Jan 22 13:35:46 compute-1 ceph-mon[81715]: 2.e scrub ok
Jan 22 13:35:46 compute-1 ceph-mon[81715]: 3.14 scrub starts
Jan 22 13:35:46 compute-1 ceph-mon[81715]: 3.14 scrub ok
Jan 22 13:35:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e33 _set_new_cache_sizes cache_size:1020052982 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:35:46 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 22 13:35:46 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 22 13:35:48 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'crash'
Jan 22 13:35:48 compute-1 ceph-mgr[82073]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 13:35:48 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'dashboard'
Jan 22 13:35:48 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:48.448+0000 7fb431a22140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 13:35:48 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 22 13:35:48 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 22 13:35:49 compute-1 ceph-mon[81715]: pgmap v116: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e34 e34: 2 total, 2 up, 2 in
Jan 22 13:35:50 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'devicehealth'
Jan 22 13:35:50 compute-1 ceph-mgr[82073]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 13:35:50 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'diskprediction_local'
Jan 22 13:35:50 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:50.362+0000 7fb431a22140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 13:35:50 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 22 13:35:50 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 22 13:35:50 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 22 13:35:50 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 22 13:35:50 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]:   from numpy import show_config as show_numpy_config
Jan 22 13:35:50 compute-1 ceph-mgr[82073]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 13:35:50 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'influx'
Jan 22 13:35:50 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:50.960+0000 7fb431a22140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 13:35:51 compute-1 ceph-mgr[82073]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 13:35:51 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'insights'
Jan 22 13:35:51 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:51.231+0000 7fb431a22140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 13:35:51 compute-1 ceph-mon[81715]: 3.10 scrub starts
Jan 22 13:35:51 compute-1 ceph-mon[81715]: 3.10 scrub ok
Jan 22 13:35:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2143486171' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 13:35:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:35:51 compute-1 ceph-mon[81715]: 3.f scrub starts
Jan 22 13:35:51 compute-1 ceph-mon[81715]: 3.f scrub ok
Jan 22 13:35:51 compute-1 ceph-mon[81715]: pgmap v117: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2710829164' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 13:35:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:35:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:51 compute-1 ceph-mon[81715]: osdmap e34: 2 total, 2 up, 2 in
Jan 22 13:35:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:35:51 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'iostat'
Jan 22 13:35:51 compute-1 ceph-mgr[82073]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 13:35:51 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'k8sevents'
Jan 22 13:35:51 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:51.763+0000 7fb431a22140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 13:35:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:35:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e35 e35: 2 total, 2 up, 2 in
Jan 22 13:35:53 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'localpool'
Jan 22 13:35:53 compute-1 ceph-mon[81715]: 3.e scrub starts
Jan 22 13:35:53 compute-1 ceph-mon[81715]: 3.e scrub ok
Jan 22 13:35:53 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/777136089' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 22 13:35:53 compute-1 ceph-mon[81715]: pgmap v119: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:54 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'mds_autoscaler'
Jan 22 13:35:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e36 e36: 2 total, 2 up, 2 in
Jan 22 13:35:54 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'mirroring'
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:35:54 compute-1 ceph-mon[81715]: osdmap e35: 2 total, 2 up, 2 in
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 22 13:35:54 compute-1 ceph-mon[81715]: pgmap v121: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='client.14268 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 13:35:54 compute-1 ceph-mon[81715]: 2.19 deep-scrub starts
Jan 22 13:35:54 compute-1 ceph-mon[81715]: 2.19 deep-scrub ok
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:35:54 compute-1 ceph-mon[81715]: osdmap e36: 2 total, 2 up, 2 in
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:35:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:55 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'nfs'
Jan 22 13:35:55 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e37 e37: 2 total, 2 up, 2 in
Jan 22 13:35:55 compute-1 ceph-mgr[82073]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 13:35:55 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'orchestrator'
Jan 22 13:35:55 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:55.931+0000 7fb431a22140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 13:35:56 compute-1 ceph-mon[81715]: pgmap v123: 131 pgs: 2 peering, 62 unknown, 67 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:56 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 22 13:35:56 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:35:56 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 22 13:35:56 compute-1 ceph-mon[81715]: osdmap e37: 2 total, 2 up, 2 in
Jan 22 13:35:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e38 e38: 2 total, 2 up, 2 in
Jan 22 13:35:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e39 e39: 3 total, 2 up, 3 in
Jan 22 13:35:56 compute-1 ceph-mgr[82073]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 13:35:56 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'osd_perf_query'
Jan 22 13:35:56 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:56.699+0000 7fb431a22140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 13:35:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:35:57 compute-1 ceph-mgr[82073]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 13:35:57 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'osd_support'
Jan 22 13:35:57 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:57.038+0000 7fb431a22140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 13:35:57 compute-1 ceph-mgr[82073]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 13:35:57 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'pg_autoscaler'
Jan 22 13:35:57 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:57.355+0000 7fb431a22140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 13:35:57 compute-1 ceph-mon[81715]: from='client.14274 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 13:35:57 compute-1 ceph-mon[81715]: osdmap e38: 2 total, 2 up, 2 in
Jan 22 13:35:57 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3979291260' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]: dispatch
Jan 22 13:35:57 compute-1 ceph-mon[81715]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]: dispatch
Jan 22 13:35:57 compute-1 ceph-mon[81715]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]': finished
Jan 22 13:35:57 compute-1 ceph-mon[81715]: osdmap e39: 3 total, 2 up, 3 in
Jan 22 13:35:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:35:57 compute-1 ceph-mgr[82073]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 13:35:57 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'progress'
Jan 22 13:35:57 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:57.670+0000 7fb431a22140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 13:35:57 compute-1 ceph-mgr[82073]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 13:35:57 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'prometheus'
Jan 22 13:35:57 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:57.993+0000 7fb431a22140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 13:35:58 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2302690487' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 22 13:35:58 compute-1 ceph-mon[81715]: pgmap v127: 146 pgs: 2 peering, 77 unknown, 67 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:59 compute-1 ceph-mgr[82073]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 13:35:59 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:59.213+0000 7fb431a22140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 13:35:59 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'rbd_support'
Jan 22 13:35:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e40 e40: 3 total, 2 up, 3 in
Jan 22 13:35:59 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40 pruub=9.400292397s) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active pruub 94.091590881s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:35:59 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40 pruub=9.400292397s) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown pruub 94.091590881s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 13:35:59 compute-1 ceph-mgr[82073]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 13:35:59 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'restful'
Jan 22 13:35:59 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:59.549+0000 7fb431a22140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 13:35:59 compute-1 ceph-mon[81715]: from='client.14283 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 13:35:59 compute-1 ceph-mon[81715]: 5.1 deep-scrub starts
Jan 22 13:35:59 compute-1 ceph-mon[81715]: 5.1 deep-scrub ok
Jan 22 13:35:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:35:59 compute-1 ceph-mon[81715]: osdmap e40: 3 total, 2 up, 3 in
Jan 22 13:35:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:35:59 compute-1 ceph-mon[81715]: pgmap v129: 177 pgs: 2 peering, 108 unknown, 67 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:59 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 22 13:35:59 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 22 13:36:00 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e41 e41: 3 total, 2 up, 3 in
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1c( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1e( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.13( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.10( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.17( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.14( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.15( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.b( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.8( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.9( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.12( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.6( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.7( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.d( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.19( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.4( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.f( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.16( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.10( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.17( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.12( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.7( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=40/41 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.19( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:00 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'rgw'
Jan 22 13:36:00 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 22 13:36:00 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 22 13:36:01 compute-1 ceph-mon[81715]: from='client.14289 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 13:36:01 compute-1 ceph-mon[81715]: 3.c scrub starts
Jan 22 13:36:01 compute-1 ceph-mon[81715]: 3.c scrub ok
Jan 22 13:36:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:01 compute-1 ceph-mon[81715]: osdmap e41: 3 total, 2 up, 3 in
Jan 22 13:36:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:01 compute-1 ceph-mon[81715]: 3.d scrub starts
Jan 22 13:36:01 compute-1 ceph-mon[81715]: 3.d scrub ok
Jan 22 13:36:01 compute-1 ceph-mgr[82073]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 13:36:01 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'rook'
Jan 22 13:36:01 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:01.221+0000 7fb431a22140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 13:36:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:03 compute-1 ceph-mon[81715]: 4.1 scrub starts
Jan 22 13:36:03 compute-1 ceph-mon[81715]: 4.1 scrub ok
Jan 22 13:36:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:03 compute-1 ceph-mon[81715]: pgmap v131: 177 pgs: 31 unknown, 146 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:03 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/265572544' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 13:36:03 compute-1 ceph-mgr[82073]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 13:36:03 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'selftest'
Jan 22 13:36:03 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:03.831+0000 7fb431a22140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 13:36:04 compute-1 ceph-mgr[82073]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 13:36:04 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:04.106+0000 7fb431a22140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 13:36:04 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'snap_schedule'
Jan 22 13:36:04 compute-1 ceph-mgr[82073]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 13:36:04 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'stats'
Jan 22 13:36:04 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:04.425+0000 7fb431a22140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 13:36:04 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'status'
Jan 22 13:36:05 compute-1 ceph-mgr[82073]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 13:36:05 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:05.002+0000 7fb431a22140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 13:36:05 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'telegraf'
Jan 22 13:36:05 compute-1 sshd-session[71269]: Received disconnect from 38.102.83.41 port 52510:11: disconnected by user
Jan 22 13:36:05 compute-1 sshd-session[71269]: Disconnected from user zuul 38.102.83.41 port 52510
Jan 22 13:36:05 compute-1 systemd[1]: session-19.scope: Deactivated successfully.
Jan 22 13:36:05 compute-1 sshd-session[71266]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:36:05 compute-1 systemd[1]: session-19.scope: Consumed 9.404s CPU time.
Jan 22 13:36:05 compute-1 systemd-logind[787]: Session 19 logged out. Waiting for processes to exit.
Jan 22 13:36:05 compute-1 systemd-logind[787]: Removed session 19.
Jan 22 13:36:05 compute-1 ceph-mgr[82073]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 13:36:05 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'telemetry'
Jan 22 13:36:05 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:05.269+0000 7fb431a22140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 13:36:05 compute-1 ceph-mon[81715]: pgmap v132: 177 pgs: 31 unknown, 146 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:05 compute-1 ceph-mon[81715]: 4.2 scrub starts
Jan 22 13:36:05 compute-1 ceph-mon[81715]: 4.2 scrub ok
Jan 22 13:36:05 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 22 13:36:05 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 22 13:36:05 compute-1 ceph-mgr[82073]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 13:36:05 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:05.926+0000 7fb431a22140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 13:36:05 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'test_orchestrator'
Jan 22 13:36:06 compute-1 ceph-mgr[82073]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 13:36:06 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:06.668+0000 7fb431a22140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 13:36:06 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'volumes'
Jan 22 13:36:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e42 e42: 3 total, 2 up, 3 in
Jan 22 13:36:07 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/459129720' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 22 13:36:07 compute-1 ceph-mon[81715]: pgmap v133: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:36:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:36:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 13:36:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:36:07 compute-1 ceph-mon[81715]: Standby manager daemon compute-2.tjdsdx started
Jan 22 13:36:07 compute-1 ceph-mgr[82073]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 13:36:07 compute-1 ceph-mgr[82073]: mgr[py] Loading python module 'zabbix'
Jan 22 13:36:07 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:07.489+0000 7fb431a22140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.15( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.c( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.e( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.1b( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.10( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.1f( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.1c( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.1f( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.554998398s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.353668213s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.554976463s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.353668213s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561773300s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.360542297s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561741829s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.360542297s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.562335014s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361251831s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.562321663s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361251831s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561624527s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.360671997s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561611176s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.360671997s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561557770s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.360694885s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561546326s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.360694885s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561418533s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.360733032s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561405182s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.360733032s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561334610s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.360755920s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561320305s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.360755920s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561245918s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.360763550s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561233521s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.360763550s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561607361s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361213684s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561594963s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361213684s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561145782s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.360832214s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561132431s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.360832214s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561257362s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361030579s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561244011s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361030579s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561180115s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361106873s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561167717s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361106873s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561090469s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361122131s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561079025s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361122131s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561025620s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361145020s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561012268s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361145020s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.560912132s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361167908s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.560898781s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361167908s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.560844421s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361190796s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.560832977s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361190796s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.560770988s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361206055s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.560759544s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361206055s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:07 compute-1 ceph-mgr[82073]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 13:36:07 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:07.792+0000 7fb431a22140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 13:36:07 compute-1 ceph-mgr[82073]: ms_deliver_dispatch: unhandled message 0x562bbfe2f600 mon_map magic: 0 v1 from mon.2 v2:192.168.122.101:3300/0
Jan 22 13:36:07 compute-1 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 13:36:08 compute-1 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 13:36:09 compute-1 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 13:36:10 compute-1 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 13:36:10 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 22 13:36:10 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 22 13:36:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e43 e43: 3 total, 2 up, 3 in
Jan 22 13:36:11 compute-1 ceph-mon[81715]: 3.5 scrub starts
Jan 22 13:36:11 compute-1 ceph-mon[81715]: 3.5 scrub ok
Jan 22 13:36:11 compute-1 ceph-mon[81715]: 5.2 deep-scrub starts
Jan 22 13:36:11 compute-1 ceph-mon[81715]: 5.2 deep-scrub ok
Jan 22 13:36:11 compute-1 ceph-mon[81715]: pgmap v134: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:11 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/647988089' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 22 13:36:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:36:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:36:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 13:36:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:36:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:36:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:36:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 13:36:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:36:11 compute-1 ceph-mon[81715]: osdmap e42: 3 total, 2 up, 3 in
Jan 22 13:36:11 compute-1 ceph-mon[81715]: mgrmap e10: compute-0.nyayzk(active, since 3m), standbys: compute-2.tjdsdx
Jan 22 13:36:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr metadata", "who": "compute-2.tjdsdx", "id": "compute-2.tjdsdx"}]: dispatch
Jan 22 13:36:11 compute-1 ceph-mon[81715]: Standby manager daemon compute-1.hzmatt started
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.e( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.d( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.5( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.15( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.c( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.1b( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.3( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.8( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.7( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.10( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.1c( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.1f( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.1f( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:12 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 22 13:36:12 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 22 13:36:13 compute-1 ceph-mon[81715]: 4.3 deep-scrub starts
Jan 22 13:36:13 compute-1 ceph-mon[81715]: 4.3 deep-scrub ok
Jan 22 13:36:13 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/502293407' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 22 13:36:13 compute-1 ceph-mon[81715]: pgmap v136: 177 pgs: 38 peering, 139 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:13 compute-1 ceph-mon[81715]: 3.13 scrub starts
Jan 22 13:36:13 compute-1 ceph-mon[81715]: 3.13 scrub ok
Jan 22 13:36:13 compute-1 ceph-mon[81715]: pgmap v137: 177 pgs: 55 peering, 122 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:36:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:36:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 13:36:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:36:13 compute-1 ceph-mon[81715]: osdmap e43: 3 total, 2 up, 3 in
Jan 22 13:36:13 compute-1 ceph-mon[81715]: mgrmap e11: compute-0.nyayzk(active, since 3m), standbys: compute-2.tjdsdx, compute-1.hzmatt
Jan 22 13:36:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr metadata", "who": "compute-1.hzmatt", "id": "compute-1.hzmatt"}]: dispatch
Jan 22 13:36:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 22 13:36:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:36:13 compute-1 ceph-mon[81715]: Deploying daemon osd.2 on compute-2
Jan 22 13:36:15 compute-1 ceph-mon[81715]: 3.9 scrub starts
Jan 22 13:36:15 compute-1 ceph-mon[81715]: 3.9 scrub ok
Jan 22 13:36:15 compute-1 ceph-mon[81715]: pgmap v139: 177 pgs: 55 peering, 122 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:16 compute-1 ceph-mon[81715]: 5.3 scrub starts
Jan 22 13:36:16 compute-1 ceph-mon[81715]: 5.3 scrub ok
Jan 22 13:36:16 compute-1 ceph-mon[81715]: pgmap v140: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:17 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.a deep-scrub starts
Jan 22 13:36:17 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.a deep-scrub ok
Jan 22 13:36:18 compute-1 ceph-mon[81715]: pgmap v141: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:19 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 22 13:36:19 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 22 13:36:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:21 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 22 13:36:21 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 22 13:36:23 compute-1 ceph-mon[81715]: 3.a deep-scrub starts
Jan 22 13:36:23 compute-1 ceph-mon[81715]: 3.a deep-scrub ok
Jan 22 13:36:23 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 22 13:36:23 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 22 13:36:25 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 22 13:36:25 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 22 13:36:25 compute-1 ceph-mon[81715]: 4.4 scrub starts
Jan 22 13:36:25 compute-1 ceph-mon[81715]: 4.4 scrub ok
Jan 22 13:36:25 compute-1 ceph-mon[81715]: pgmap v142: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:25 compute-1 ceph-mon[81715]: 3.1d scrub starts
Jan 22 13:36:25 compute-1 ceph-mon[81715]: 3.1d scrub ok
Jan 22 13:36:25 compute-1 ceph-mon[81715]: pgmap v143: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:25 compute-1 ceph-mon[81715]: 3.1c scrub starts
Jan 22 13:36:25 compute-1 ceph-mon[81715]: 3.1c scrub ok
Jan 22 13:36:25 compute-1 ceph-mon[81715]: pgmap v144: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:25 compute-1 ceph-mon[81715]: 7.1 scrub starts
Jan 22 13:36:25 compute-1 ceph-mon[81715]: 7.1 scrub ok
Jan 22 13:36:26 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.7 deep-scrub starts
Jan 22 13:36:26 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.7 deep-scrub ok
Jan 22 13:36:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:28 compute-1 ceph-mon[81715]: 5.5 scrub starts
Jan 22 13:36:28 compute-1 ceph-mon[81715]: 5.5 scrub ok
Jan 22 13:36:28 compute-1 ceph-mon[81715]: pgmap v145: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:28 compute-1 ceph-mon[81715]: 4.6 scrub starts
Jan 22 13:36:28 compute-1 ceph-mon[81715]: 4.6 scrub ok
Jan 22 13:36:28 compute-1 ceph-mon[81715]: 7.5 scrub starts
Jan 22 13:36:28 compute-1 ceph-mon[81715]: 7.5 scrub ok
Jan 22 13:36:28 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 22 13:36:28 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 22 13:36:29 compute-1 ceph-mon[81715]: 7.7 deep-scrub starts
Jan 22 13:36:29 compute-1 ceph-mon[81715]: 7.7 deep-scrub ok
Jan 22 13:36:29 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:29 compute-1 ceph-mon[81715]: pgmap v146: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:30 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 22 13:36:30 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 22 13:36:31 compute-1 ceph-mon[81715]: 7.c scrub starts
Jan 22 13:36:31 compute-1 ceph-mon[81715]: 7.c scrub ok
Jan 22 13:36:31 compute-1 ceph-mon[81715]: pgmap v147: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:31 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 22 13:36:31 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 22 13:36:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e44 e44: 3 total, 2 up, 3 in
Jan 22 13:36:32 compute-1 ceph-mon[81715]: 5.6 deep-scrub starts
Jan 22 13:36:32 compute-1 ceph-mon[81715]: 5.6 deep-scrub ok
Jan 22 13:36:32 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:32 compute-1 ceph-mon[81715]: 7.d scrub starts
Jan 22 13:36:32 compute-1 ceph-mon[81715]: 7.d scrub ok
Jan 22 13:36:32 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:32 compute-1 ceph-mon[81715]: from='osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 22 13:36:32 compute-1 ceph-mon[81715]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 22 13:36:32 compute-1 ceph-mon[81715]: pgmap v148: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:32 compute-1 ceph-mon[81715]: 7.11 scrub starts
Jan 22 13:36:32 compute-1 ceph-mon[81715]: 7.11 scrub ok
Jan 22 13:36:33 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 22 13:36:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e45 e45: 3 total, 2 up, 3 in
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.1f( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.653626442s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.779006958s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.1f( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.653626442s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.779006958s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794871330s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 130.920425415s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.18( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.259327888s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 130.384948730s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794871330s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920425415s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.18( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.259327888s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.384948730s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.235723495s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 133.361526489s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.235723495s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361526489s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.235956192s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 133.361801147s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.235956192s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361801147s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.12( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.258738518s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 130.384674072s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.12( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.258738518s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.384674072s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794334412s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 130.920425415s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794334412s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920425415s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.258543968s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 130.384719849s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.258543968s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.384719849s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794472694s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 130.920593262s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794253349s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 130.920516968s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794472694s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920593262s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794253349s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920516968s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[5.4( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.652106285s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.778671265s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.652234077s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.778808594s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[5.4( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.652106285s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.778671265s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.235224724s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 133.361862183s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.652074814s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.778793335s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.257728577s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 130.384506226s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.652234077s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.778808594s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.1( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.651782990s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.778671265s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.1( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.651782990s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.778671265s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.235224724s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361862183s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.652074814s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.778793335s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.257728577s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.384506226s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.793506622s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 130.920608521s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.257336617s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 130.384475708s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.793506622s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920608521s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.1c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.257034302s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 130.384429932s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.1c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.257034302s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.384429932s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.647268295s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.774719238s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.647268295s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.774719238s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.1d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.248373032s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 130.375930786s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.1d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.248373032s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.375930786s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[5.e( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.647026062s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.774658203s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[5.e( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.647026062s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.774658203s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.257336617s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.384475708s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.793272972s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 130.920959473s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.15( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.646950722s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.774719238s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.15( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.646950722s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.774719238s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.234349251s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 133.362182617s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.234349251s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.362182617s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.793272972s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920959473s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:33 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 22 13:36:33 compute-1 ceph-mon[81715]: 4.7 deep-scrub starts
Jan 22 13:36:33 compute-1 ceph-mon[81715]: 4.7 deep-scrub ok
Jan 22 13:36:33 compute-1 ceph-mon[81715]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 22 13:36:33 compute-1 ceph-mon[81715]: osdmap e44: 3 total, 2 up, 3 in
Jan 22 13:36:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:33 compute-1 ceph-mon[81715]: from='osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 22 13:36:33 compute-1 ceph-mon[81715]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 22 13:36:33 compute-1 ceph-mon[81715]: pgmap v150: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:35 compute-1 ceph-mon[81715]: purged_snaps scrub starts
Jan 22 13:36:35 compute-1 ceph-mon[81715]: purged_snaps scrub ok
Jan 22 13:36:35 compute-1 ceph-mon[81715]: 5.8 scrub starts
Jan 22 13:36:35 compute-1 ceph-mon[81715]: 5.8 scrub ok
Jan 22 13:36:35 compute-1 ceph-mon[81715]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 22 13:36:35 compute-1 ceph-mon[81715]: osdmap e45: 3 total, 2 up, 3 in
Jan 22 13:36:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:35 compute-1 ceph-mon[81715]: 7.12 scrub starts
Jan 22 13:36:35 compute-1 ceph-mon[81715]: 7.12 scrub ok
Jan 22 13:36:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gfsxzw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 13:36:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gfsxzw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 13:36:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:36:35 compute-1 ceph-mon[81715]: Deploying daemon rgw.rgw.compute-2.gfsxzw on compute-2
Jan 22 13:36:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:36 compute-1 ceph-mon[81715]: pgmap v152: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:36 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 22 13:36:36 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 22 13:36:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:37 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 22 13:36:37 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 22 13:36:38 compute-1 ceph-mon[81715]: 4.b deep-scrub starts
Jan 22 13:36:38 compute-1 ceph-mon[81715]: 4.b deep-scrub ok
Jan 22 13:36:38 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:38 compute-1 ceph-mon[81715]: pgmap v153: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:39 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 22 13:36:39 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 22 13:36:40 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e46 e46: 3 total, 2 up, 3 in
Jan 22 13:36:40 compute-1 ceph-mon[81715]: 7.15 scrub starts
Jan 22 13:36:40 compute-1 ceph-mon[81715]: 7.15 scrub ok
Jan 22 13:36:40 compute-1 ceph-mon[81715]: 5.a scrub starts
Jan 22 13:36:40 compute-1 ceph-mon[81715]: 5.a scrub ok
Jan 22 13:36:40 compute-1 ceph-mon[81715]: 7.17 scrub starts
Jan 22 13:36:40 compute-1 ceph-mon[81715]: 7.17 scrub ok
Jan 22 13:36:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:40 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Jan 22 13:36:40 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Jan 22 13:36:41 compute-1 sudo[82109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:36:41 compute-1 sudo[82109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:41 compute-1 sudo[82109]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:41 compute-1 sudo[82134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:36:41 compute-1 sudo[82134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:41 compute-1 sudo[82134]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:41 compute-1 sudo[82159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:36:41 compute-1 sudo[82159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:41 compute-1 sudo[82159]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:41 compute-1 sudo[82184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:36:41 compute-1 sudo[82184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e47 e47: 3 total, 2 up, 3 in
Jan 22 13:36:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:41 compute-1 podman[82248]: 2026-01-22 13:36:41.909849263 +0000 UTC m=+0.051807796 container create 1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jones, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:36:41 compute-1 systemd[1]: Started libpod-conmon-1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81.scope.
Jan 22 13:36:41 compute-1 podman[82248]: 2026-01-22 13:36:41.886012212 +0000 UTC m=+0.027970765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:41 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:36:41 compute-1 podman[82248]: 2026-01-22 13:36:41.993381896 +0000 UTC m=+0.135340439 container init 1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 13:36:42 compute-1 podman[82248]: 2026-01-22 13:36:42.001108898 +0000 UTC m=+0.143067431 container start 1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:36:42 compute-1 podman[82248]: 2026-01-22 13:36:42.005240071 +0000 UTC m=+0.147198584 container attach 1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jones, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:36:42 compute-1 reverent_jones[82264]: 167 167
Jan 22 13:36:42 compute-1 systemd[1]: libpod-1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81.scope: Deactivated successfully.
Jan 22 13:36:42 compute-1 conmon[82264]: conmon 1ec786deb05b30425d0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81.scope/container/memory.events
Jan 22 13:36:42 compute-1 podman[82248]: 2026-01-22 13:36:42.010226137 +0000 UTC m=+0.152184660 container died 1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jones, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 13:36:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-7b2d05f86c171946c6d6de1189602d9516b6029f065f02e6c6bd13ed83c9e5e5-merged.mount: Deactivated successfully.
Jan 22 13:36:42 compute-1 podman[82248]: 2026-01-22 13:36:42.050909619 +0000 UTC m=+0.192868142 container remove 1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:36:42 compute-1 systemd[1]: libpod-conmon-1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81.scope: Deactivated successfully.
Jan 22 13:36:42 compute-1 systemd[1]: Reloading.
Jan 22 13:36:42 compute-1 systemd-rc-local-generator[82310]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:36:42 compute-1 systemd-sysv-generator[82313]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:36:42 compute-1 systemd[1]: Reloading.
Jan 22 13:36:42 compute-1 systemd-rc-local-generator[82349]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:36:42 compute-1 systemd-sysv-generator[82353]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:36:42 compute-1 systemd[1]: Starting Ceph rgw.rgw.compute-1.thdhdp for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 13:36:42 compute-1 ceph-mon[81715]: 5.c scrub starts
Jan 22 13:36:42 compute-1 ceph-mon[81715]: 5.c scrub ok
Jan 22 13:36:42 compute-1 ceph-mon[81715]: pgmap v154: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:42 compute-1 ceph-mon[81715]: 7.19 scrub starts
Jan 22 13:36:42 compute-1 ceph-mon[81715]: 7.19 scrub ok
Jan 22 13:36:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:42 compute-1 ceph-mon[81715]: 7.1a deep-scrub starts
Jan 22 13:36:42 compute-1 ceph-mon[81715]: 7.1a deep-scrub ok
Jan 22 13:36:42 compute-1 ceph-mon[81715]: osdmap e46: 3 total, 2 up, 3 in
Jan 22 13:36:42 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 13:36:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:42 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 13:36:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.thdhdp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 13:36:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.thdhdp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 13:36:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:36:42 compute-1 ceph-mon[81715]: Deploying daemon rgw.rgw.compute-1.thdhdp on compute-1
Jan 22 13:36:42 compute-1 ceph-mon[81715]: pgmap v156: 178 pgs: 1 unknown, 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:42 compute-1 podman[82407]: 2026-01-22 13:36:42.937288093 +0000 UTC m=+0.050458330 container create 23102aca31774d35fb66e5a0ea310071b7f3d8f6b2965c50c70b36b8efad689e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-1-thdhdp, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 13:36:42 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7391ccc110e8b98a4ae90a6485f77f17dd40862bd583516c626b2f26638845de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:42 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7391ccc110e8b98a4ae90a6485f77f17dd40862bd583516c626b2f26638845de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:42 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7391ccc110e8b98a4ae90a6485f77f17dd40862bd583516c626b2f26638845de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:42 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7391ccc110e8b98a4ae90a6485f77f17dd40862bd583516c626b2f26638845de/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.thdhdp supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:42 compute-1 podman[82407]: 2026-01-22 13:36:42.994796255 +0000 UTC m=+0.107966492 container init 23102aca31774d35fb66e5a0ea310071b7f3d8f6b2965c50c70b36b8efad689e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-1-thdhdp, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:36:43 compute-1 podman[82407]: 2026-01-22 13:36:43.002248328 +0000 UTC m=+0.115418545 container start 23102aca31774d35fb66e5a0ea310071b7f3d8f6b2965c50c70b36b8efad689e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-1-thdhdp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 13:36:43 compute-1 bash[82407]: 23102aca31774d35fb66e5a0ea310071b7f3d8f6b2965c50c70b36b8efad689e
Jan 22 13:36:43 compute-1 podman[82407]: 2026-01-22 13:36:42.916039553 +0000 UTC m=+0.029209800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:43 compute-1 systemd[1]: Started Ceph rgw.rgw.compute-1.thdhdp for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:36:43 compute-1 sudo[82184]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:43 compute-1 radosgw[82426]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 22 13:36:43 compute-1 radosgw[82426]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 22 13:36:43 compute-1 radosgw[82426]: framework: beast
Jan 22 13:36:43 compute-1 radosgw[82426]: framework conf key: endpoint, val: 192.168.122.101:8082
Jan 22 13:36:43 compute-1 radosgw[82426]: init_numa not setting numa affinity
Jan 22 13:36:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e48 e48: 3 total, 2 up, 3 in
Jan 22 13:36:43 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 22 13:36:43 compute-1 ceph-mon[81715]: osdmap e47: 3 total, 2 up, 3 in
Jan 22 13:36:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:43 compute-1 ceph-mon[81715]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:36:43 compute-1 ceph-mon[81715]: OSD bench result of 4825.905468 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 22 13:36:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:43 compute-1 ceph-mon[81715]: osdmap e48: 3 total, 2 up, 3 in
Jan 22 13:36:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.iqhnfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 13:36:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.iqhnfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 13:36:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:36:43 compute-1 ceph-mon[81715]: Deploying daemon rgw.rgw.compute-0.iqhnfa on compute-0
Jan 22 13:36:43 compute-1 ceph-mon[81715]: pgmap v159: 178 pgs: 1 unknown, 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Jan 22 13:36:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 22 13:36:44 compute-1 ceph-mon[81715]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3143195983' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 13:36:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:45 compute-1 ceph-mon[81715]: osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328] boot
Jan 22 13:36:45 compute-1 ceph-mon[81715]: osdmap e49: 3 total, 3 up, 3 in
Jan 22 13:36:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:45 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3143195983' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 13:36:45 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 13:36:45 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 13:36:45 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 13:36:45 compute-1 ceph-mon[81715]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 13:36:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[4.1f( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.589950562s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361526489s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[2.18( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.589828014s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361526489s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[2.18( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[3.1a( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148709118s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920425415s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[3.1a( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148670778s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920425415s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[3.15( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148662463s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920593262s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[4.1f( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[3.15( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148639068s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920593262s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[2.12( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[2.12( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[3.11( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148291469s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920425415s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[4.9( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[3.11( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148260996s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920425415s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[2.f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[3.e( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148221418s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920516968s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[4.8( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[4.9( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[4.8( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[2.f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.589405537s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361801147s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[3.e( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148026794s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920516968s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.589246511s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361801147s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[5.4( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[5.4( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[2.5( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[4.1( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[2.5( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[4.1( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.588890314s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361862183s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[2.b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[2.1c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[2.1c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[2.b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[3.1d( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.147798270s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920959473s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[3.1d( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.147768766s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920959473s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[3.9( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.147391483s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920608521s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[5.1a( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[2.1d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[3.9( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.147337750s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920608521s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[5.1a( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[2.1d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[5.e( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[4.15( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[5.e( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[4.15( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.588813782s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361862183s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.588687897s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.362182617s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:36:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.588541985s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.362182617s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:36:45 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 22 13:36:45 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 22 13:36:46 compute-1 ceph-mon[81715]: pgmap v161: 179 pgs: 1 creating+peering, 27 peering, 151 active+clean; 451 KiB data, 481 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Jan 22 13:36:46 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 22 13:36:46 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 22 13:36:46 compute-1 ceph-mon[81715]: osdmap e50: 3 total, 3 up, 3 in
Jan 22 13:36:46 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:46 compute-1 ceph-mon[81715]: 4.f scrub starts
Jan 22 13:36:46 compute-1 ceph-mon[81715]: 4.f scrub ok
Jan 22 13:36:46 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:46 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:46 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:46 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:46 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zycvef", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 13:36:46 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zycvef", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 13:36:46 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:36:46 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 22 13:36:46 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 22 13:36:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:47 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 22 13:36:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Jan 22 13:36:47 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 22 13:36:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 51 pg[10.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 22 13:36:47 compute-1 ceph-mon[81715]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3143195983' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 13:36:48 compute-1 ceph-mon[81715]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 13:36:48 compute-1 ceph-mon[81715]: 7.1c scrub starts
Jan 22 13:36:48 compute-1 ceph-mon[81715]: 7.1c scrub ok
Jan 22 13:36:48 compute-1 ceph-mon[81715]: Deploying daemon mds.cephfs.compute-2.zycvef on compute-2
Jan 22 13:36:48 compute-1 ceph-mon[81715]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:36:48 compute-1 ceph-mon[81715]: 4.c scrub starts
Jan 22 13:36:48 compute-1 ceph-mon[81715]: 4.c scrub ok
Jan 22 13:36:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Jan 22 13:36:49 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 52 pg[10.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:49 compute-1 ceph-mon[81715]: pgmap v163: 179 pgs: 1 creating+peering, 27 peering, 151 active+clean; 451 KiB data, 481 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 22 13:36:49 compute-1 ceph-mon[81715]: 4.10 deep-scrub starts
Jan 22 13:36:49 compute-1 ceph-mon[81715]: 4.10 deep-scrub ok
Jan 22 13:36:49 compute-1 ceph-mon[81715]: 6.e scrub starts
Jan 22 13:36:49 compute-1 ceph-mon[81715]: 6.e scrub ok
Jan 22 13:36:49 compute-1 ceph-mon[81715]: osdmap e51: 3 total, 3 up, 3 in
Jan 22 13:36:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3143195983' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 13:36:49 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 13:36:49 compute-1 ceph-mon[81715]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 13:36:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3865277149' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 13:36:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 13:36:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:49 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 13:36:49 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 13:36:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3865277149' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 13:36:49 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 13:36:49 compute-1 ceph-mon[81715]: osdmap e52: 3 total, 3 up, 3 in
Jan 22 13:36:49 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 22 13:36:49 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 22 13:36:50 compute-1 ceph-mon[81715]: 4.1d scrub starts
Jan 22 13:36:50 compute-1 ceph-mon[81715]: 4.1d scrub ok
Jan 22 13:36:50 compute-1 ceph-mon[81715]: pgmap v166: 180 pgs: 2 creating+peering, 27 peering, 151 active+clean; 451 KiB data, 481 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 22 13:36:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:50 compute-1 ceph-mon[81715]: 6.d scrub starts
Jan 22 13:36:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zjixst", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 13:36:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zjixst", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 13:36:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:36:50 compute-1 ceph-mon[81715]: 5.12 scrub starts
Jan 22 13:36:50 compute-1 ceph-mon[81715]: 5.12 scrub ok
Jan 22 13:36:50 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 22 13:36:50 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 22 13:36:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e3 new map
Jan 22 13:36:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:35:18.163248+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.zycvef{-1:24139} state up:standby seq 1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 13:36:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Jan 22 13:36:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 22 13:36:51 compute-1 ceph-mon[81715]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1101481797' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 13:36:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e4 new map
Jan 22 13:36:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:36:51.171709+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.zycvef{0:24139} state up:creating seq 1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Jan 22 13:36:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:52 compute-1 ceph-mon[81715]: 4.11 scrub starts
Jan 22 13:36:52 compute-1 ceph-mon[81715]: 4.11 scrub ok
Jan 22 13:36:52 compute-1 ceph-mon[81715]: 6.d scrub ok
Jan 22 13:36:52 compute-1 ceph-mon[81715]: Deploying daemon mds.cephfs.compute-0.zjixst on compute-0
Jan 22 13:36:52 compute-1 ceph-mon[81715]: 6.5 scrub starts
Jan 22 13:36:52 compute-1 ceph-mon[81715]: 6.5 scrub ok
Jan 22 13:36:52 compute-1 ceph-mon[81715]: 4.12 scrub starts
Jan 22 13:36:52 compute-1 ceph-mon[81715]: 4.12 scrub ok
Jan 22 13:36:52 compute-1 ceph-mon[81715]: mds.? [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] up:boot
Jan 22 13:36:52 compute-1 ceph-mon[81715]: daemon mds.cephfs.compute-2.zycvef assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 22 13:36:52 compute-1 ceph-mon[81715]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 22 13:36:52 compute-1 ceph-mon[81715]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 22 13:36:52 compute-1 ceph-mon[81715]: Cluster is now healthy
Jan 22 13:36:52 compute-1 ceph-mon[81715]: fsmap cephfs:0 1 up:standby
Jan 22 13:36:52 compute-1 ceph-mon[81715]: osdmap e53: 3 total, 3 up, 3 in
Jan 22 13:36:52 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.zycvef"}]: dispatch
Jan 22 13:36:52 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 13:36:52 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1101481797' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 13:36:52 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3083812118' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 13:36:52 compute-1 ceph-mon[81715]: fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:creating}
Jan 22 13:36:52 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 13:36:52 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 13:36:52 compute-1 ceph-mon[81715]: pgmap v168: 181 pgs: 1 unknown, 2 active+clean+laggy, 1 creating+peering, 177 active+clean; 451 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 345 B/s wr, 4 op/s
Jan 22 13:36:52 compute-1 ceph-mon[81715]: daemon mds.cephfs.compute-2.zycvef is now active in filesystem cephfs as rank 0
Jan 22 13:36:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e5 new map
Jan 22 13:36:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:36:52.245537+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Jan 22 13:36:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Jan 22 13:36:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 22 13:36:52 compute-1 ceph-mon[81715]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1101481797' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 13:36:52 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 22 13:36:52 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 22 13:36:53 compute-1 ceph-mon[81715]: 4.16 scrub starts
Jan 22 13:36:53 compute-1 ceph-mon[81715]: 4.16 scrub ok
Jan 22 13:36:53 compute-1 ceph-mon[81715]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:36:53 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 13:36:53 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 13:36:53 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 13:36:53 compute-1 ceph-mon[81715]: osdmap e54: 3 total, 3 up, 3 in
Jan 22 13:36:53 compute-1 ceph-mon[81715]: mds.? [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] up:active
Jan 22 13:36:53 compute-1 ceph-mon[81715]: fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active}
Jan 22 13:36:53 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 13:36:53 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3083812118' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 13:36:53 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1101481797' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 13:36:53 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 13:36:53 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 13:36:53 compute-1 ceph-mon[81715]: 6.2 scrub starts
Jan 22 13:36:53 compute-1 ceph-mon[81715]: 6.2 scrub ok
Jan 22 13:36:53 compute-1 ceph-mon[81715]: 5.b scrub starts
Jan 22 13:36:53 compute-1 ceph-mon[81715]: 5.b scrub ok
Jan 22 13:36:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e6 new map
Jan 22 13:36:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:36:52.245537+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 13:36:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Jan 22 13:36:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e7 new map
Jan 22 13:36:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:36:52.245537+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 13:36:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:55 compute-1 ceph-mon[81715]: pgmap v170: 181 pgs: 1 unknown, 2 active+clean+laggy, 1 creating+peering, 177 active+clean; 451 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 362 B/s wr, 4 op/s
Jan 22 13:36:55 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 13:36:55 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 13:36:55 compute-1 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 13:36:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:55 compute-1 ceph-mon[81715]: osdmap e55: 3 total, 3 up, 3 in
Jan 22 13:36:55 compute-1 ceph-mon[81715]: mds.? [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] up:boot
Jan 22 13:36:55 compute-1 ceph-mon[81715]: fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active} 1 up:standby
Jan 22 13:36:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.zjixst"}]: dispatch
Jan 22 13:36:55 compute-1 ceph-mon[81715]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 13:36:55 compute-1 ceph-mon[81715]: Cluster is now healthy
Jan 22 13:36:55 compute-1 ceph-mon[81715]: fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active} 1 up:standby
Jan 22 13:36:55 compute-1 sudo[82497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:36:55 compute-1 sudo[82497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:55 compute-1 sudo[82497]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:55 compute-1 sudo[82522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:36:55 compute-1 sudo[82522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:55 compute-1 sudo[82522]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:55 compute-1 sudo[82547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:36:55 compute-1 sudo[82547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:55 compute-1 sudo[82547]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:55 compute-1 sudo[82572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:36:55 compute-1 sudo[82572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:55 compute-1 podman[82637]: 2026-01-22 13:36:55.953638598 +0000 UTC m=+0.046689207 container create a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cannon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 13:36:56 compute-1 systemd[1]: Started libpod-conmon-a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003.scope.
Jan 22 13:36:56 compute-1 podman[82637]: 2026-01-22 13:36:55.931922764 +0000 UTC m=+0.024973403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:56 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:36:56 compute-1 podman[82637]: 2026-01-22 13:36:56.060999282 +0000 UTC m=+0.154049911 container init a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cannon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:36:56 compute-1 podman[82637]: 2026-01-22 13:36:56.068481227 +0000 UTC m=+0.161531836 container start a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cannon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:36:56 compute-1 podman[82637]: 2026-01-22 13:36:56.072446205 +0000 UTC m=+0.165496824 container attach a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cannon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:36:56 compute-1 angry_cannon[82653]: 167 167
Jan 22 13:36:56 compute-1 systemd[1]: libpod-a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003.scope: Deactivated successfully.
Jan 22 13:36:56 compute-1 podman[82637]: 2026-01-22 13:36:56.076110494 +0000 UTC m=+0.169161093 container died a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:36:56 compute-1 systemd[1]: var-lib-containers-storage-overlay-4ded85dc22e22ea18daa32eeb784729b41272a201922c487dd9028c6e2c71d72-merged.mount: Deactivated successfully.
Jan 22 13:36:56 compute-1 podman[82637]: 2026-01-22 13:36:56.11510316 +0000 UTC m=+0.208153769 container remove a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:36:56 compute-1 systemd[1]: libpod-conmon-a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003.scope: Deactivated successfully.
Jan 22 13:36:56 compute-1 systemd[1]: Reloading.
Jan 22 13:36:56 compute-1 radosgw[82426]: LDAP not started since no server URIs were provided in the configuration.
Jan 22 13:36:56 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-1-thdhdp[82422]: 2026-01-22T13:36:56.183+0000 7fdce17b9940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 22 13:36:56 compute-1 radosgw[82426]: framework: beast
Jan 22 13:36:56 compute-1 radosgw[82426]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 22 13:36:56 compute-1 radosgw[82426]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 22 13:36:56 compute-1 radosgw[82426]: starting handler: beast
Jan 22 13:36:56 compute-1 radosgw[82426]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 13:36:56 compute-1 systemd-rc-local-generator[82725]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:36:56 compute-1 systemd-sysv-generator[82753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:36:56 compute-1 radosgw[82426]: mgrc service_daemon_register rgw.24134 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.thdhdp,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=9ef52632-dffc-43fe-ad78-aca5b0d3574d,zone_name=default,zonegroup_id=961906d1-4e51-43eb-bd43-c4a4ab081aea,zonegroup_name=default}
Jan 22 13:36:56 compute-1 systemd[1]: Reloading.
Jan 22 13:36:56 compute-1 radosgw[82426]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 22 13:36:56 compute-1 systemd-rc-local-generator[83283]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:36:56 compute-1 systemd-sysv-generator[83286]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:36:56 compute-1 systemd[1]: Starting Ceph mds.cephfs.compute-1.ofmmzj for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 13:36:56 compute-1 radosgw[82426]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 22 13:36:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:57 compute-1 podman[83338]: 2026-01-22 13:36:56.998861024 +0000 UTC m=+0.040568211 container create 8dd280a87453c9cd6a0d5909da93b71a91fc226820f3456e2c4ccfd46343a14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-1-ofmmzj, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 13:36:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ofmmzj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 13:36:57 compute-1 ceph-mon[81715]: 5.d scrub starts
Jan 22 13:36:57 compute-1 ceph-mon[81715]: 5.d scrub ok
Jan 22 13:36:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ofmmzj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 13:36:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:36:57 compute-1 ceph-mon[81715]: Deploying daemon mds.cephfs.compute-1.ofmmzj on compute-1
Jan 22 13:36:57 compute-1 ceph-mon[81715]: pgmap v172: 181 pgs: 2 active+clean+laggy, 179 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 5.0 KiB/s wr, 20 op/s
Jan 22 13:36:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:36:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae761a8bd634a2930add77d124704061b535378ac98230c3bfea60d4f94dc62c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae761a8bd634a2930add77d124704061b535378ac98230c3bfea60d4f94dc62c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae761a8bd634a2930add77d124704061b535378ac98230c3bfea60d4f94dc62c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae761a8bd634a2930add77d124704061b535378ac98230c3bfea60d4f94dc62c/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.ofmmzj supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:57 compute-1 podman[83338]: 2026-01-22 13:36:57.067993472 +0000 UTC m=+0.109700679 container init 8dd280a87453c9cd6a0d5909da93b71a91fc226820f3456e2c4ccfd46343a14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-1-ofmmzj, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 13:36:57 compute-1 podman[83338]: 2026-01-22 13:36:57.073156564 +0000 UTC m=+0.114863751 container start 8dd280a87453c9cd6a0d5909da93b71a91fc226820f3456e2c4ccfd46343a14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-1-ofmmzj, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:36:57 compute-1 bash[83338]: 8dd280a87453c9cd6a0d5909da93b71a91fc226820f3456e2c4ccfd46343a14c
Jan 22 13:36:57 compute-1 podman[83338]: 2026-01-22 13:36:56.98188134 +0000 UTC m=+0.023588547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:57 compute-1 systemd[1]: Started Ceph mds.cephfs.compute-1.ofmmzj for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:36:57 compute-1 sudo[82572]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:57 compute-1 ceph-mds[83358]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 13:36:57 compute-1 ceph-mds[83358]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 22 13:36:57 compute-1 ceph-mds[83358]: main not setting numa affinity
Jan 22 13:36:57 compute-1 ceph-mds[83358]: pidfile_write: ignore empty --pid-file
Jan 22 13:36:57 compute-1 ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-1-ofmmzj[83354]: starting mds.cephfs.compute-1.ofmmzj at 
Jan 22 13:36:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Jan 22 13:36:58 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Updating MDS map to version 7 from mon.2
Jan 22 13:36:58 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 22 13:36:58 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 22 13:36:58 compute-1 ceph-mon[81715]: pgmap v173: 181 pgs: 2 active+clean+laggy, 179 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 4.5 KiB/s wr, 16 op/s
Jan 22 13:36:59 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.d deep-scrub starts
Jan 22 13:36:59 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.d deep-scrub ok
Jan 22 13:37:00 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e8 new map
Jan 22 13:37:00 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:36:52.245537+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.ofmmzj{-1:24140} state up:standby seq 1 addr [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 13:37:00 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Jan 22 13:37:00 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Updating MDS map to version 8 from mon.2
Jan 22 13:37:00 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Monitors have assigned me to become a standby.
Jan 22 13:37:00 compute-1 ceph-mon[81715]: 7.1d scrub starts
Jan 22 13:37:00 compute-1 ceph-mon[81715]: 7.1d scrub ok
Jan 22 13:37:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:37:00 compute-1 ceph-mon[81715]: osdmap e56: 3 total, 3 up, 3 in
Jan 22 13:37:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:37:00 compute-1 ceph-mon[81715]: 4.17 scrub starts
Jan 22 13:37:00 compute-1 ceph-mon[81715]: 5.1b scrub starts
Jan 22 13:37:00 compute-1 ceph-mon[81715]: 5.1b scrub ok
Jan 22 13:37:00 compute-1 ceph-mon[81715]: 4.17 scrub ok
Jan 22 13:37:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:00 compute-1 ceph-mon[81715]: pgmap v175: 181 pgs: 2 active+clean+laggy, 179 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 4.0 KiB/s wr, 14 op/s
Jan 22 13:37:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Jan 22 13:37:01 compute-1 ceph-mon[81715]: 3.0 scrub starts
Jan 22 13:37:01 compute-1 ceph-mon[81715]: 3.0 scrub ok
Jan 22 13:37:01 compute-1 ceph-mon[81715]: 5.14 scrub starts
Jan 22 13:37:01 compute-1 ceph-mon[81715]: 5.14 scrub ok
Jan 22 13:37:01 compute-1 ceph-mon[81715]: 4.d deep-scrub starts
Jan 22 13:37:01 compute-1 ceph-mon[81715]: 4.d deep-scrub ok
Jan 22 13:37:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:37:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:01 compute-1 ceph-mon[81715]: osdmap e57: 3 total, 3 up, 3 in
Jan 22 13:37:01 compute-1 ceph-mon[81715]: mds.? [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] up:boot
Jan 22 13:37:01 compute-1 ceph-mon[81715]: fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active} 2 up:standby
Jan 22 13:37:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ofmmzj"}]: dispatch
Jan 22 13:37:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:37:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:02 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 22 13:37:02 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 22 13:37:03 compute-1 ceph-mon[81715]: 5.17 scrub starts
Jan 22 13:37:03 compute-1 ceph-mon[81715]: 5.17 scrub ok
Jan 22 13:37:03 compute-1 ceph-mon[81715]: pgmap v177: 181 pgs: 2 active+clean+laggy, 179 active+clean; 457 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 84 KiB/s rd, 5.2 KiB/s wr, 161 op/s
Jan 22 13:37:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:37:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:37:03 compute-1 ceph-mon[81715]: osdmap e58: 3 total, 3 up, 3 in
Jan 22 13:37:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:37:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Jan 22 13:37:03 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.f deep-scrub starts
Jan 22 13:37:03 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.f deep-scrub ok
Jan 22 13:37:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e9 new map
Jan 22 13:37:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:37:03.744747+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 5 join_fscid=1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.ofmmzj{-1:24140} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 13:37:03 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Updating MDS map to version 9 from mon.2
Jan 22 13:37:04 compute-1 ceph-mon[81715]: 5.0 scrub starts
Jan 22 13:37:04 compute-1 ceph-mon[81715]: 5.0 scrub ok
Jan 22 13:37:04 compute-1 ceph-mon[81715]: Deploying daemon haproxy.rgw.default.compute-0.erkqlp on compute-0
Jan 22 13:37:04 compute-1 ceph-mon[81715]: 6.3 scrub starts
Jan 22 13:37:04 compute-1 ceph-mon[81715]: 6.3 scrub ok
Jan 22 13:37:04 compute-1 ceph-mon[81715]: 6.1 scrub starts
Jan 22 13:37:04 compute-1 ceph-mon[81715]: 6.1 scrub ok
Jan 22 13:37:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:37:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:37:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:37:04 compute-1 ceph-mon[81715]: osdmap e59: 3 total, 3 up, 3 in
Jan 22 13:37:04 compute-1 ceph-mon[81715]: pgmap v180: 243 pgs: 62 unknown, 2 active+clean+laggy, 179 active+clean; 457 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 113 KiB/s rd, 3.4 KiB/s wr, 200 op/s
Jan 22 13:37:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:04 compute-1 ceph-mon[81715]: mds.? [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] up:standby
Jan 22 13:37:04 compute-1 ceph-mon[81715]: mds.? [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] up:active
Jan 22 13:37:04 compute-1 ceph-mon[81715]: fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active} 2 up:standby
Jan 22 13:37:04 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 22 13:37:04 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 22 13:37:05 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Jan 22 13:37:05 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 60 pg[10.0( v 58'96 (0'0,58'96] local-lis/les=51/52 n=8 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.265120506s) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 58'95 mlcod 58'95 active pruub 158.687652588s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:05 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 60 pg[10.0( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.265120506s) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 58'95 mlcod 0'0 unknown pruub 158.687652588s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:05 compute-1 ceph-mon[81715]: 5.19 scrub starts
Jan 22 13:37:05 compute-1 ceph-mon[81715]: 5.f deep-scrub starts
Jan 22 13:37:05 compute-1 ceph-mon[81715]: 5.f deep-scrub ok
Jan 22 13:37:05 compute-1 ceph-mon[81715]: 5.19 scrub ok
Jan 22 13:37:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e10 new map
Jan 22 13:37:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).mds e10 print_map
                                           e10
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:37:03.744747+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 5 join_fscid=1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 4 join_fscid=1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.ofmmzj{-1:24140} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 13:37:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Jan 22 13:37:06 compute-1 ceph-mon[81715]: 4.1e scrub starts
Jan 22 13:37:06 compute-1 ceph-mon[81715]: 4.1e scrub ok
Jan 22 13:37:06 compute-1 ceph-mon[81715]: 4.1a scrub starts
Jan 22 13:37:06 compute-1 ceph-mon[81715]: 4.1a scrub ok
Jan 22 13:37:06 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:37:06 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:37:06 compute-1 ceph-mon[81715]: osdmap e60: 3 total, 3 up, 3 in
Jan 22 13:37:06 compute-1 ceph-mon[81715]: pgmap v182: 305 pgs: 1 peering, 62 unknown, 2 active+clean+laggy, 240 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 98 op/s
Jan 22 13:37:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.11( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.7( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1b( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.17( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.13( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.12( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.10( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1f( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1e( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1d( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1c( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1a( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.19( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.18( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.6( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.5( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.4( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.b( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.8( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.a( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.c( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.d( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.f( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.3( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.14( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.15( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.e( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.16( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1( v 58'96 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.9( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.2( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.11( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.17( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.12( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.10( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1f( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1e( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1d( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1a( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.19( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1c( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.18( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.4( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.5( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.6( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.b( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.8( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.a( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.c( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1b( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.d( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.f( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.0( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 58'95 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.3( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.14( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.15( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.e( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.9( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.2( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.16( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.7( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.13( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:07 compute-1 ceph-mon[81715]: 5.1d deep-scrub starts
Jan 22 13:37:07 compute-1 ceph-mon[81715]: 5.1d deep-scrub ok
Jan 22 13:37:07 compute-1 ceph-mon[81715]: 4.19 scrub starts
Jan 22 13:37:07 compute-1 ceph-mon[81715]: 4.19 scrub ok
Jan 22 13:37:07 compute-1 ceph-mon[81715]: mds.? [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] up:standby
Jan 22 13:37:07 compute-1 ceph-mon[81715]: fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active} 2 up:standby
Jan 22 13:37:07 compute-1 ceph-mon[81715]: osdmap e61: 3 total, 3 up, 3 in
Jan 22 13:37:07 compute-1 ceph-mon[81715]: pgmap v184: 305 pgs: 1 peering, 62 unknown, 2 active+clean+laggy, 240 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 91 op/s
Jan 22 13:37:08 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 22 13:37:08 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 22 13:37:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 13:37:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:10.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 13:37:11 compute-1 ceph-mon[81715]: 4.1c scrub starts
Jan 22 13:37:11 compute-1 ceph-mon[81715]: 4.1c scrub ok
Jan 22 13:37:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:12.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Jan 22 13:37:12 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 22 13:37:13 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.11( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.867439270s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.332916260s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.11( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.867372513s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.332916260s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.1b( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.872615814s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338287354s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.1b( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.872380257s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338287354s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.10( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871800423s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.337860107s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.10( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871774673s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.337860107s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.1e( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871617317s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.337936401s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.12( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871500015s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.337844849s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.1e( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871585846s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.337936401s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.12( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871476173s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.337844849s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.19( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871587753s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338043213s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.19( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871541023s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338043213s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.18( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871558189s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338073730s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.18( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871539116s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338073730s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.5( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871469498s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338134766s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.5( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871451378s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338134766s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.4( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871421814s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338119507s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.4( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871317863s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338119507s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.8( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871380806s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338195801s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.8( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871359825s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338195801s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.f( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871323586s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338302612s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.13( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871201515s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338027954s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.f( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871301651s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338302612s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.3( v 61'99 (0'0,61'99] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871238708s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 61'98 mlcod 61'98 active pruub 168.338348389s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.13( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.870937347s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338027954s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.15( v 61'99 (0'0,61'99] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871099472s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 61'98 mlcod 61'98 active pruub 168.338363647s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.15( v 61'99 (0'0,61'99] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871060371s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 61'98 mlcod 0'0 unknown NOTIFY pruub 168.338363647s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.1( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871049881s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338455200s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.2( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871059418s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338485718s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.1( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871023178s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338455200s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.2( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871041298s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338485718s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.14( v 61'99 (0'0,61'99] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.870669365s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 61'98 mlcod 61'98 active pruub 168.338348389s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.3( v 61'99 (0'0,61'99] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871191978s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 61'98 mlcod 0'0 unknown NOTIFY pruub 168.338348389s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.14( v 61'99 (0'0,61'99] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.870625496s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 61'98 mlcod 0'0 unknown NOTIFY pruub 168.338348389s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.1d( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.1b( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.8( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.14( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.10( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.5( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.7( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.19( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.12( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.17( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.18( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.4( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-1 ceph-mon[81715]: 5.1e scrub starts
Jan 22 13:37:13 compute-1 ceph-mon[81715]: 5.1e scrub ok
Jan 22 13:37:13 compute-1 ceph-mon[81715]: 5.7 scrub starts
Jan 22 13:37:13 compute-1 ceph-mon[81715]: 5.7 scrub ok
Jan 22 13:37:13 compute-1 ceph-mon[81715]: 7.a scrub starts
Jan 22 13:37:13 compute-1 ceph-mon[81715]: 7.a scrub ok
Jan 22 13:37:13 compute-1 ceph-mon[81715]: pgmap v185: 305 pgs: 1 peering, 31 unknown, 2 active+clean+laggy, 271 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 82 op/s
Jan 22 13:37:13 compute-1 ceph-mon[81715]: 6.4 scrub starts
Jan 22 13:37:13 compute-1 ceph-mon[81715]: 6.4 scrub ok
Jan 22 13:37:13 compute-1 ceph-mon[81715]: pgmap v186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 0 B/s wr, 116 op/s
Jan 22 13:37:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 22 13:37:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:14.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:16.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:18 compute-1 ceph-mds[83358]: mds.beacon.cephfs.compute-1.ofmmzj missed beacon ack from the monitors
Jan 22 13:37:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:18.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Jan 22 13:37:19 compute-1 ceph-mon[81715]: 6.6 scrub starts
Jan 22 13:37:19 compute-1 ceph-mon[81715]: 6.6 scrub ok
Jan 22 13:37:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:37:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:37:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 22 13:37:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:37:19 compute-1 ceph-mon[81715]: 6.9 scrub starts
Jan 22 13:37:19 compute-1 ceph-mon[81715]: 6.7 scrub starts
Jan 22 13:37:19 compute-1 ceph-mon[81715]: 6.7 scrub ok
Jan 22 13:37:19 compute-1 ceph-mon[81715]: 6.9 scrub ok
Jan 22 13:37:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:19 compute-1 ceph-mon[81715]: osdmap e62: 3 total, 3 up, 3 in
Jan 22 13:37:19 compute-1 ceph-mon[81715]: pgmap v188: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 0 B/s wr, 53 op/s
Jan 22 13:37:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.f( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.17( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.14( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.4( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.5( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.1( v 58'2 (0'0,58'2] local-lis/les=62/63 n=1 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.4( v 48'8 (0'0,48'8] local-lis/les=62/63 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.7( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.1b( v 48'8 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.18( v 48'8 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.1b( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.1d( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.8( v 48'8 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.12( v 48'8 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.1c( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.10( v 48'8 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.1e( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.12( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.14( v 48'8 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.1a( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:19 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.19( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:20.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:22 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 22 13:37:22 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 22 13:37:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:22.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Jan 22 13:37:22 compute-1 ceph-mon[81715]: 6.b scrub starts
Jan 22 13:37:22 compute-1 ceph-mon[81715]: 6.b scrub ok
Jan 22 13:37:22 compute-1 ceph-mon[81715]: 7.14 scrub starts
Jan 22 13:37:22 compute-1 ceph-mon[81715]: pgmap v189: 305 pgs: 30 peering, 2 active+clean+laggy, 273 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 0 B/s wr, 95 op/s
Jan 22 13:37:22 compute-1 ceph-mon[81715]: 6.c scrub starts
Jan 22 13:37:22 compute-1 ceph-mon[81715]: 6.c scrub ok
Jan 22 13:37:22 compute-1 ceph-mon[81715]: 6.f scrub starts
Jan 22 13:37:22 compute-1 ceph-mon[81715]: 6.f scrub ok
Jan 22 13:37:22 compute-1 ceph-mon[81715]: pgmap v190: 305 pgs: 30 peering, 2 active+clean+laggy, 273 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 0 B/s wr, 83 op/s
Jan 22 13:37:22 compute-1 ceph-mon[81715]: 7.1b scrub starts
Jan 22 13:37:22 compute-1 ceph-mon[81715]: 7.1b scrub ok
Jan 22 13:37:22 compute-1 ceph-mon[81715]: 7.13 scrub starts
Jan 22 13:37:22 compute-1 ceph-mon[81715]: 7.13 scrub ok
Jan 22 13:37:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 22 13:37:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:22 compute-1 ceph-mon[81715]: osdmap e63: 3 total, 3 up, 3 in
Jan 22 13:37:23 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 22 13:37:23 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 22 13:37:24 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Jan 22 13:37:24 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Jan 22 13:37:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:24.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:24 compute-1 ceph-mon[81715]: 7.14 scrub ok
Jan 22 13:37:24 compute-1 ceph-mon[81715]: pgmap v192: 305 pgs: 1 active+clean+scrubbing, 61 peering, 2 active+clean+laggy, 241 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 50 op/s
Jan 22 13:37:24 compute-1 ceph-mon[81715]: 7.10 scrub starts
Jan 22 13:37:24 compute-1 ceph-mon[81715]: 7.10 scrub ok
Jan 22 13:37:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:24 compute-1 ceph-mon[81715]: Deploying daemon haproxy.rgw.default.compute-2.zogxki on compute-2
Jan 22 13:37:24 compute-1 ceph-mon[81715]: 7.1e scrub starts
Jan 22 13:37:24 compute-1 ceph-mon[81715]: 7.1e scrub ok
Jan 22 13:37:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:24 compute-1 ceph-mon[81715]: pgmap v193: 305 pgs: 1 active+clean+scrubbing, 52 peering, 2 active+clean+laggy, 250 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 48 op/s; 0 B/s, 0 objects/s recovering
Jan 22 13:37:24 compute-1 ceph-mon[81715]: 4.5 scrub starts
Jan 22 13:37:24 compute-1 ceph-mon[81715]: 4.5 scrub ok
Jan 22 13:37:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:24 compute-1 ceph-mon[81715]: osdmap e64: 3 total, 3 up, 3 in
Jan 22 13:37:24 compute-1 ceph-mon[81715]: 4.e scrub starts
Jan 22 13:37:24 compute-1 ceph-mon[81715]: 4.e scrub ok
Jan 22 13:37:25 compute-1 sshd-session[83378]: Accepted publickey for zuul from 192.168.122.30 port 45054 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:37:25 compute-1 systemd-logind[787]: New session 33 of user zuul.
Jan 22 13:37:25 compute-1 systemd[1]: Started Session 33 of User zuul.
Jan 22 13:37:25 compute-1 sshd-session[83378]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:37:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:25 compute-1 ceph-mon[81715]: pgmap v195: 305 pgs: 1 active+clean+scrubbing, 52 peering, 2 active+clean+laggy, 250 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:37:25 compute-1 ceph-mon[81715]: 4.1b deep-scrub starts
Jan 22 13:37:25 compute-1 ceph-mon[81715]: 4.1b deep-scrub ok
Jan 22 13:37:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:26.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:26 compute-1 python3.9[83531]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:37:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:27 compute-1 ceph-mon[81715]: 4.14 scrub starts
Jan 22 13:37:27 compute-1 ceph-mon[81715]: 4.14 scrub ok
Jan 22 13:37:27 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 22 13:37:27 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 22 13:37:27 compute-1 sudo[83743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhmozqarmdvcjmifshwhmrugszdkrlan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089047.3967526-57-142192391228881/AnsiballZ_command.py'
Jan 22 13:37:27 compute-1 sudo[83743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:37:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:28 compute-1 python3.9[83745]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:37:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:28.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:29 compute-1 ceph-mon[81715]: pgmap v196: 305 pgs: 29 activating, 2 active+clean+laggy, 274 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 6/217 objects degraded (2.765%); 161 B/s, 0 objects/s recovering
Jan 22 13:37:29 compute-1 ceph-mon[81715]: Health check failed: 2 slow ops, oldest one blocked for 36 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:37:29 compute-1 ceph-mon[81715]: 5.9 scrub starts
Jan 22 13:37:29 compute-1 ceph-mon[81715]: 5.9 scrub ok
Jan 22 13:37:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:29 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Jan 22 13:37:29 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Jan 22 13:37:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:29.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:30.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:31 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.15 deep-scrub starts
Jan 22 13:37:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:31.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:31 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.15 deep-scrub ok
Jan 22 13:37:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:32.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:32 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 22 13:37:33 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 22 13:37:33 compute-1 ceph-mon[81715]: pgmap v197: 305 pgs: 29 activating, 2 active+clean+laggy, 274 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 6/217 objects degraded (2.765%); 161 B/s, 0 objects/s recovering
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 7.b scrub starts
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 7.b scrub ok
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 7.8 scrub starts
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 7.8 scrub ok
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:33.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 4.a deep-scrub starts
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 4.a deep-scrub ok
Jan 22 13:37:33 compute-1 ceph-mon[81715]: pgmap v198: 305 pgs: 29 activating, 2 active+clean+laggy, 274 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 6/217 objects degraded (2.765%); 129 B/s, 0 objects/s recovering
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 5.13 scrub starts
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 5.13 scrub ok
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 5.15 deep-scrub starts
Jan 22 13:37:33 compute-1 ceph-mon[81715]: pgmap v199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 268 B/s, 0 objects/s recovering
Jan 22 13:37:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 5.15 deep-scrub ok
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 5.11 scrub starts
Jan 22 13:37:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:33 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 41 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:37:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:34.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:35 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Jan 22 13:37:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:35 compute-1 ceph-mon[81715]: 5.11 scrub ok
Jan 22 13:37:35 compute-1 ceph-mon[81715]: pgmap v200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 252 B/s, 0 objects/s recovering
Jan 22 13:37:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 13:37:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 13:37:35 compute-1 ceph-mon[81715]: osdmap e65: 3 total, 3 up, 3 in
Jan 22 13:37:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:35 compute-1 ceph-mon[81715]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 13:37:35 compute-1 ceph-mon[81715]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 13:37:35 compute-1 ceph-mon[81715]: Deploying daemon keepalived.rgw.default.compute-0.hawera on compute-0
Jan 22 13:37:35 compute-1 ceph-mon[81715]: 7.2 scrub starts
Jan 22 13:37:35 compute-1 ceph-mon[81715]: 7.2 scrub ok
Jan 22 13:37:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:35.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:35 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Jan 22 13:37:36 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 22 13:37:36 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 22 13:37:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:36.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 13:37:36 compute-1 ceph-mon[81715]: osdmap e66: 3 total, 3 up, 3 in
Jan 22 13:37:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:36 compute-1 ceph-mon[81715]: pgmap v203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 174 B/s, 0 objects/s recovering
Jan 22 13:37:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 22 13:37:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 22 13:37:36 compute-1 ceph-mon[81715]: osdmap e67: 3 total, 3 up, 3 in
Jan 22 13:37:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Jan 22 13:37:37 compute-1 ceph-mon[81715]: 5.16 scrub starts
Jan 22 13:37:37 compute-1 ceph-mon[81715]: 5.16 scrub ok
Jan 22 13:37:37 compute-1 ceph-mon[81715]: 7.9 scrub starts
Jan 22 13:37:37 compute-1 ceph-mon[81715]: 7.9 scrub ok
Jan 22 13:37:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:37 compute-1 ceph-mon[81715]: osdmap e68: 3 total, 3 up, 3 in
Jan 22 13:37:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 22 13:37:37 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 47 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:37:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:37.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:38 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 22 13:37:38 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 22 13:37:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:38.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Jan 22 13:37:38 compute-1 sudo[83743]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:39 compute-1 sshd-session[83381]: Connection closed by 192.168.122.30 port 45054
Jan 22 13:37:39 compute-1 sshd-session[83378]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:37:39 compute-1 systemd[1]: session-33.scope: Deactivated successfully.
Jan 22 13:37:39 compute-1 systemd[1]: session-33.scope: Consumed 8.941s CPU time.
Jan 22 13:37:39 compute-1 systemd-logind[787]: Session 33 logged out. Waiting for processes to exit.
Jan 22 13:37:39 compute-1 systemd-logind[787]: Removed session 33.
Jan 22 13:37:39 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 22 13:37:39 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 22 13:37:39 compute-1 ceph-mon[81715]: pgmap v206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:37:39 compute-1 ceph-mon[81715]: 7.e scrub starts
Jan 22 13:37:39 compute-1 ceph-mon[81715]: 7.e scrub ok
Jan 22 13:37:39 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:39 compute-1 ceph-mon[81715]: 5.1f scrub starts
Jan 22 13:37:39 compute-1 ceph-mon[81715]: 5.1f scrub ok
Jan 22 13:37:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 22 13:37:39 compute-1 ceph-mon[81715]: osdmap e69: 3 total, 3 up, 3 in
Jan 22 13:37:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Jan 22 13:37:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:39.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:40.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:40 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Jan 22 13:37:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:40 compute-1 ceph-mon[81715]: 6.a scrub starts
Jan 22 13:37:40 compute-1 ceph-mon[81715]: 6.a scrub ok
Jan 22 13:37:40 compute-1 ceph-mon[81715]: osdmap e70: 3 total, 3 up, 3 in
Jan 22 13:37:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:40 compute-1 ceph-mon[81715]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 13:37:40 compute-1 ceph-mon[81715]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 13:37:40 compute-1 ceph-mon[81715]: Deploying daemon keepalived.rgw.default.compute-2.xbsrtt on compute-2
Jan 22 13:37:41 compute-1 ceph-mon[81715]: pgmap v209: 305 pgs: 4 unknown, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:37:41 compute-1 ceph-mon[81715]: 4.9 scrub starts
Jan 22 13:37:41 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:41 compute-1 ceph-mon[81715]: 4.9 scrub ok
Jan 22 13:37:41 compute-1 ceph-mon[81715]: osdmap e71: 3 total, 3 up, 3 in
Jan 22 13:37:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:41 compute-1 ceph-mon[81715]: 5.e deep-scrub starts
Jan 22 13:37:41 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:41 compute-1 ceph-mon[81715]: 5.e deep-scrub ok
Jan 22 13:37:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Jan 22 13:37:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:41.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:42.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Jan 22 13:37:42 compute-1 ceph-mon[81715]: pgmap v211: 305 pgs: 4 unknown, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:37:42 compute-1 ceph-mon[81715]: osdmap e72: 3 total, 3 up, 3 in
Jan 22 13:37:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:37:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:43.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:37:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:44.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:45.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:45 compute-1 ceph-mon[81715]: 7.3 deep-scrub starts
Jan 22 13:37:45 compute-1 ceph-mon[81715]: 7.3 deep-scrub ok
Jan 22 13:37:45 compute-1 ceph-mon[81715]: osdmap e73: 3 total, 3 up, 3 in
Jan 22 13:37:45 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 52 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:37:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Jan 22 13:37:46 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 74 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [1] r=0 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:46 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 74 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [1] r=0 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:46 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 74 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [1] r=0 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:46 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 74 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [1] r=0 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:46.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:46 compute-1 sudo[83802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:46 compute-1 sudo[83802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:46 compute-1 sudo[83802]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:46 compute-1 sudo[83827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:37:46 compute-1 sudo[83827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:46 compute-1 sudo[83827]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:46 compute-1 sudo[83852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:46 compute-1 sudo[83852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:46 compute-1 sudo[83852]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:46 compute-1 sudo[83877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:37:46 compute-1 sudo[83877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:46 compute-1 sudo[83877]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:46 compute-1 sudo[83902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:46 compute-1 sudo[83902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:46 compute-1 sudo[83902]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:46 compute-1 sudo[83927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:37:46 compute-1 sudo[83927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:47 compute-1 ceph-mon[81715]: 7.6 scrub starts
Jan 22 13:37:47 compute-1 ceph-mon[81715]: 7.6 scrub ok
Jan 22 13:37:47 compute-1 ceph-mon[81715]: pgmap v214: 305 pgs: 4 unknown, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:37:47 compute-1 ceph-mon[81715]: 7.18 scrub starts
Jan 22 13:37:47 compute-1 ceph-mon[81715]: 7.18 scrub ok
Jan 22 13:37:47 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:47 compute-1 ceph-mon[81715]: 7.4 deep-scrub starts
Jan 22 13:37:47 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:47 compute-1 ceph-mon[81715]: pgmap v215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 682 B/s wr, 52 op/s; 300 B/s, 10 objects/s recovering
Jan 22 13:37:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 22 13:37:47 compute-1 ceph-mon[81715]: 7.f scrub starts
Jan 22 13:37:47 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:47 compute-1 ceph-mon[81715]: 7.4 deep-scrub ok
Jan 22 13:37:47 compute-1 ceph-mon[81715]: 7.f scrub ok
Jan 22 13:37:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 22 13:37:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:47 compute-1 ceph-mon[81715]: osdmap e74: 3 total, 3 up, 3 in
Jan 22 13:37:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1", "id": [0, 1]}]: dispatch
Jan 22 13:37:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.12", "id": [0, 1]}]: dispatch
Jan 22 13:37:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Jan 22 13:37:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e75 crush map has features 3314933000854323200, adjusting msgr requires
Jan 22 13:37:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e75 crush map has features 432629239337189376, adjusting msgr requires
Jan 22 13:37:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e75 crush map has features 432629239337189376, adjusting msgr requires
Jan 22 13:37:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e75 crush map has features 432629239337189376, adjusting msgr requires
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 75 crush map has features 432629239337189376, adjusting msgr requires for clients
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 75 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 75 crush map has features 3314933000854323200, adjusting msgr requires for osds
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1] r=0 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.12( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1] r=0 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:47 compute-1 podman[84020]: 2026-01-22 13:37:47.458021012 +0000 UTC m=+0.061207719 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 13:37:47 compute-1 podman[84020]: 2026-01-22 13:37:47.552384675 +0000 UTC m=+0.155571362 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:37:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:47.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 76 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [1]/[0] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 76 pg[9.12( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [1]/[0] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 76 pg[9.12( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [1]/[0] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 76 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [1]/[0] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:47 compute-1 sudo[83927]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:48 compute-1 sudo[84144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:48 compute-1 sudo[84144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:48 compute-1 sudo[84144]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:48 compute-1 sudo[84169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:37:48 compute-1 sudo[84169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:48 compute-1 sudo[84169]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:48.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:48 compute-1 sudo[84194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:48 compute-1 sudo[84194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:48 compute-1 sudo[84194]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:48 compute-1 sudo[84219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:37:48 compute-1 sudo[84219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:48 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:48 compute-1 ceph-mon[81715]: 8.1 scrub starts
Jan 22 13:37:48 compute-1 ceph-mon[81715]: 8.1 scrub ok
Jan 22 13:37:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1", "id": [0, 1]}]': finished
Jan 22 13:37:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.12", "id": [0, 1]}]': finished
Jan 22 13:37:48 compute-1 ceph-mon[81715]: osdmap e75: 3 total, 3 up, 3 in
Jan 22 13:37:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 22 13:37:48 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 57 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:37:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 22 13:37:48 compute-1 ceph-mon[81715]: osdmap e76: 3 total, 3 up, 3 in
Jan 22 13:37:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:48 compute-1 sudo[84219]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:49.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:50.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:50 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Jan 22 13:37:50 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=0/0 n=5 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 luod=0'0 crt=62'697 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:50 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=0/0 n=5 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=62'697 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:50 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.6( v 61'698 (0'0,61'698] local-lis/les=0/0 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 luod=0'0 crt=61'698 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:50 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.6( v 61'698 (0'0,61'698] local-lis/les=0/0 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=61'698 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:50 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.e( v 62'695 (0'0,62'695] local-lis/les=0/0 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 luod=0'0 crt=62'695 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:50 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.e( v 62'695 (0'0,62'695] local-lis/les=0/0 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=62'695 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:50 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 luod=0'0 crt=58'684 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:50 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:50 compute-1 ceph-mon[81715]: pgmap v218: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 685 B/s wr, 53 op/s; 301 B/s, 10 objects/s recovering
Jan 22 13:37:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:50 compute-1 ceph-mon[81715]: 8.7 scrub starts
Jan 22 13:37:50 compute-1 ceph-mon[81715]: 8.7 scrub ok
Jan 22 13:37:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:51 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 22 13:37:51 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 22 13:37:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Jan 22 13:37:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.12( v 61'698 (0'0,61'698] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 luod=0'0 crt=61'698 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.12( v 61'698 (0'0,61'698] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=61'698 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.1( v 62'703 (0'0,62'703] local-lis/les=0/0 n=7 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 luod=0'0 crt=62'703 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.1( v 62'703 (0'0,62'703] local-lis/les=0/0 n=7 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=62'703 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.6( v 61'698 (0'0,61'698] local-lis/les=77/78 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=61'698 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=77/78 n=5 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=62'697 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.e( v 62'695 (0'0,62'695] local-lis/les=77/78 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=62'695 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=77/78 n=4 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=58'684 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:51 compute-1 ceph-mon[81715]: pgmap v220: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 682 B/s wr, 52 op/s; 300 B/s, 10 objects/s recovering
Jan 22 13:37:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 13:37:51 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:51 compute-1 ceph-mon[81715]: 7.1f scrub starts
Jan 22 13:37:51 compute-1 ceph-mon[81715]: 7.1f scrub ok
Jan 22 13:37:51 compute-1 ceph-mon[81715]: osdmap e77: 3 total, 3 up, 3 in
Jan 22 13:37:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:51 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:51 compute-1 ceph-mon[81715]: 4.15 scrub starts
Jan 22 13:37:51 compute-1 ceph-mon[81715]: 4.15 scrub ok
Jan 22 13:37:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:51.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:52.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:53 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 22 13:37:53 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 22 13:37:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:53.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:54.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:55 compute-1 sshd-session[84274]: Accepted publickey for zuul from 192.168.122.30 port 39468 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:37:55 compute-1 systemd-logind[787]: New session 34 of user zuul.
Jan 22 13:37:55 compute-1 systemd[1]: Started Session 34 of User zuul.
Jan 22 13:37:55 compute-1 sshd-session[84274]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:37:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:55.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:55 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Jan 22 13:37:55 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 79 pg[9.12( v 61'698 (0'0,61'698] local-lis/les=78/79 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=61'698 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:55 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 79 pg[9.1( v 62'703 (0'0,62'703] local-lis/les=78/79 n=7 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=62'703 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:55 compute-1 python3.9[84427]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 22 13:37:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:56.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:56 compute-1 ceph-mon[81715]: 5.1c scrub starts
Jan 22 13:37:56 compute-1 ceph-mon[81715]: 5.1c scrub ok
Jan 22 13:37:56 compute-1 ceph-mon[81715]: 8.e deep-scrub starts
Jan 22 13:37:56 compute-1 ceph-mon[81715]: 8.e deep-scrub ok
Jan 22 13:37:56 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 13:37:56 compute-1 ceph-mon[81715]: pgmap v222: 305 pgs: 2 active+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 20 B/s, 2 objects/s recovering
Jan 22 13:37:56 compute-1 ceph-mon[81715]: osdmap e78: 3 total, 3 up, 3 in
Jan 22 13:37:56 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 13:37:56 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 62 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:37:57 compute-1 python3.9[84601]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:37:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Jan 22 13:37:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:57.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:58 compute-1 ceph-mon[81715]: 4.1f scrub starts
Jan 22 13:37:58 compute-1 ceph-mon[81715]: 4.1f scrub ok
Jan 22 13:37:58 compute-1 ceph-mon[81715]: 4.8 scrub starts
Jan 22 13:37:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:58 compute-1 ceph-mon[81715]: 5.18 scrub starts
Jan 22 13:37:58 compute-1 ceph-mon[81715]: 5.18 scrub ok
Jan 22 13:37:58 compute-1 ceph-mon[81715]: pgmap v224: 305 pgs: 2 active+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Jan 22 13:37:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 22 13:37:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:58 compute-1 ceph-mon[81715]: 5.4 scrub starts
Jan 22 13:37:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:58 compute-1 ceph-mon[81715]: pgmap v225: 305 pgs: 2 peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 43 B/s, 4 objects/s recovering
Jan 22 13:37:58 compute-1 ceph-mon[81715]: 4.8 scrub ok
Jan 22 13:37:58 compute-1 ceph-mon[81715]: 5.4 scrub ok
Jan 22 13:37:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 13:37:58 compute-1 ceph-mon[81715]: osdmap e79: 3 total, 3 up, 3 in
Jan 22 13:37:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:58.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:58 compute-1 sudo[84755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsjggtoxzugdcfskqjdyaysrkramudbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089077.920974-94-108333959177218/AnsiballZ_command.py'
Jan 22 13:37:58 compute-1 sudo[84755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:37:58 compute-1 python3.9[84757]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:37:58 compute-1 sudo[84755]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:59 compute-1 sudo[84908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktrrwcwhaoynyxghdchhydjovinxrrca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089079.1192741-130-270614414851609/AnsiballZ_stat.py'
Jan 22 13:37:59 compute-1 sudo[84908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:37:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:37:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:59.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:59 compute-1 python3.9[84910]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:37:59 compute-1 sudo[84908]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:00 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 22 13:38:00 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 22 13:38:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:00.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:00 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Jan 22 13:38:00 compute-1 sudo[85062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izcbsgzunvvsvdgmsvudnarjvsfrwgso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089080.380577-163-136165091101448/AnsiballZ_file.py'
Jan 22 13:38:00 compute-1 sudo[85062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:38:01 compute-1 python3.9[85064]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:38:01 compute-1 sudo[85062]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:01 compute-1 sudo[85214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pltcdmujdrisnwynhrpmbdpzskgtcgva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089081.3342526-190-104109004501647/AnsiballZ_file.py'
Jan 22 13:38:01 compute-1 sudo[85214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:38:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:01.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:01 compute-1 python3.9[85216]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:38:01 compute-1 sudo[85214]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:02 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 22 13:38:02 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 22 13:38:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:02.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:03 compute-1 python3.9[85366]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:38:03 compute-1 network[85383]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:38:03 compute-1 network[85384]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:38:03 compute-1 network[85385]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:38:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:03 compute-1 ceph-mon[81715]: pgmap v227: 305 pgs: 2 unknown, 2 active+clean+scrubbing, 2 peering, 2 active+clean+laggy, 297 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 46 B/s, 4 objects/s recovering
Jan 22 13:38:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 22 13:38:03 compute-1 ceph-mon[81715]: osdmap e80: 3 total, 3 up, 3 in
Jan 22 13:38:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:03.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:04 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 22 13:38:04 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 22 13:38:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:04.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Jan 22 13:38:05 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Jan 22 13:38:05 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 7.16 scrub starts
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 7.16 scrub ok
Jan 22 13:38:05 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 67 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:05 compute-1 ceph-mon[81715]: pgmap v229: 305 pgs: 2 unknown, 2 active+clean+scrubbing, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 2 objects/s recovering
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 5.1a scrub starts
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 4.13 scrub starts
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 4.13 scrub ok
Jan 22 13:38:05 compute-1 ceph-mon[81715]: osdmap e81: 3 total, 3 up, 3 in
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 10.1e scrub starts
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 5.1a scrub ok
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 10.1e scrub ok
Jan 22 13:38:05 compute-1 ceph-mon[81715]: pgmap v231: 305 pgs: 2 active+clean+scrubbing, 2 activating+remapped, 2 unknown, 2 active+clean+laggy, 297 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 11/206 objects misplaced (5.340%)
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 8.13 scrub starts
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 8.13 scrub ok
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 6.8 scrub starts
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 6.8 scrub ok
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 8.2 scrub starts
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 8.1a deep-scrub starts
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 8.1a deep-scrub ok
Jan 22 13:38:05 compute-1 ceph-mon[81715]: 8.2 scrub ok
Jan 22 13:38:05 compute-1 ceph-mon[81715]: pgmap v232: 305 pgs: 2 active+clean+scrubbing, 2 activating+remapped, 2 unknown, 2 active+clean+laggy, 297 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 11/206 objects misplaced (5.340%)
Jan 22 13:38:05 compute-1 ceph-mon[81715]: osdmap e82: 3 total, 3 up, 3 in
Jan 22 13:38:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:05.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:06.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Jan 22 13:38:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:07.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:08 compute-1 ceph-mon[81715]: 4.18 scrub starts
Jan 22 13:38:08 compute-1 ceph-mon[81715]: 4.18 scrub ok
Jan 22 13:38:08 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 74 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:08 compute-1 ceph-mon[81715]: 5.10 deep-scrub starts
Jan 22 13:38:08 compute-1 ceph-mon[81715]: 5.10 deep-scrub ok
Jan 22 13:38:08 compute-1 ceph-mon[81715]: pgmap v234: 305 pgs: 2 active+clean+scrubbing, 2 activating+remapped, 2 unknown, 2 active+clean+laggy, 297 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 11/206 objects misplaced (5.340%)
Jan 22 13:38:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:08.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Jan 22 13:38:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:08 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Jan 22 13:38:09 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Jan 22 13:38:09 compute-1 sudo[85572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:09 compute-1 sudo[85572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-1 sudo[85572]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-1 sudo[85610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 22 13:38:09 compute-1 sudo[85610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-1 sudo[85610]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-1 sudo[85657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:09 compute-1 sudo[85657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-1 sudo[85657]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-1 sudo[85721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph
Jan 22 13:38:09 compute-1 sudo[85721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-1 sudo[85721]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-1 sudo[85746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:09 compute-1 sudo[85746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-1 sudo[85746]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:09.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:09 compute-1 ceph-mon[81715]: osdmap e83: 3 total, 3 up, 3 in
Jan 22 13:38:09 compute-1 ceph-mon[81715]: pgmap v236: 305 pgs: 1 active+recovering+remapped, 2 unknown, 1 active+recovery_wait+remapped, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 9/205 objects misplaced (4.390%); 0 B/s, 0 objects/s recovering
Jan 22 13:38:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:09 compute-1 ceph-mon[81715]: osdmap e84: 3 total, 3 up, 3 in
Jan 22 13:38:09 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:09 compute-1 sudo[85771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:38:09 compute-1 sudo[85771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-1 sudo[85771]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-1 python3.9[85720]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:38:09 compute-1 sudo[85796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:09 compute-1 sudo[85796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-1 sudo[85796]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-1 sudo[85833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:38:09 compute-1 sudo[85833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-1 sudo[85833]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-1 sudo[85870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:09 compute-1 sudo[85870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-1 sudo[85870]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-1 sudo[85895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:38:10 compute-1 sudo[85895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-1 sudo[85895]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-1 sudo[85943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:10 compute-1 sudo[85943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-1 sudo[85943]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-1 sudo[85991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:38:10 compute-1 sudo[85991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-1 sudo[85991]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-1 sudo[86045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:10 compute-1 sudo[86045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-1 sudo[86045]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:38:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:10.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:38:10 compute-1 sudo[86093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:38:10 compute-1 sudo[86093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-1 sudo[86093]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-1 sudo[86142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:10 compute-1 sudo[86142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-1 sudo[86142]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-1 sudo[86194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 22 13:38:10 compute-1 sudo[86194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-1 sudo[86194]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-1 sudo[86219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:10 compute-1 sudo[86219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-1 sudo[86219]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-1 sudo[86244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config
Jan 22 13:38:10 compute-1 sudo[86244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-1 sudo[86244]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-1 sudo[86269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:10 compute-1 sudo[86269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-1 sudo[86269]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-1 python3.9[86192]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:38:10 compute-1 sudo[86294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config
Jan 22 13:38:10 compute-1 sudo[86294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-1 sudo[86294]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-1 sudo[86323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:10 compute-1 sudo[86323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-1 sudo[86323]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-1 sudo[86348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:38:10 compute-1 sudo[86348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-1 sudo[86348]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-1 sudo[86373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:10 compute-1 sudo[86373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-1 sudo[86373]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-1 sudo[86398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:38:10 compute-1 sudo[86398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-1 sudo[86398]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-1 sudo[86447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:11 compute-1 sudo[86447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-1 sudo[86447]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-1 sudo[86472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:38:11 compute-1 sudo[86472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-1 sudo[86472]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-1 sudo[86520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:11 compute-1 sudo[86520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-1 sudo[86520]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-1 sudo[86545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:38:11 compute-1 sudo[86545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-1 sudo[86545]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-1 sudo[86570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:11 compute-1 sudo[86570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-1 sudo[86570]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-1 sudo[86595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:38:11 compute-1 sudo[86595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-1 sudo[86595]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-1 sudo[86620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:11 compute-1 sudo[86620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-1 sudo[86620]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-1 sudo[86645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 13:38:11 compute-1 sudo[86645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-1 sudo[86645]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Jan 22 13:38:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:11.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:11 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Jan 22 13:38:11 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Jan 22 13:38:12 compute-1 python3.9[86795]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:38:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:12.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:12 compute-1 sudo[86951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfsjviwencdyuwgqpvorelpjljdxcbie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089092.7045214-334-243292824157744/AnsiballZ_setup.py'
Jan 22 13:38:12 compute-1 sudo[86951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:38:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:13 compute-1 ceph-mon[81715]: 10.6 deep-scrub starts
Jan 22 13:38:13 compute-1 ceph-mon[81715]: 10.6 deep-scrub ok
Jan 22 13:38:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 13:38:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:38:13 compute-1 ceph-mon[81715]: Updating compute-0:/etc/ceph/ceph.conf
Jan 22 13:38:13 compute-1 ceph-mon[81715]: Updating compute-1:/etc/ceph/ceph.conf
Jan 22 13:38:13 compute-1 ceph-mon[81715]: Updating compute-2:/etc/ceph/ceph.conf
Jan 22 13:38:13 compute-1 ceph-mon[81715]: pgmap v238: 305 pgs: 1 active+recovering+remapped, 2 unknown, 1 active+recovery_wait+remapped, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 9/205 objects misplaced (4.390%); 0 B/s, 0 objects/s recovering
Jan 22 13:38:13 compute-1 python3.9[86953]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:38:13 compute-1 sudo[86951]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:13.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:13 compute-1 sudo[87035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytbrgcqffiehsjquamzfvaigvmoqwjxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089092.7045214-334-243292824157744/AnsiballZ_dnf.py'
Jan 22 13:38:13 compute-1 sudo[87035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:38:14 compute-1 python3.9[87037]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:38:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:14.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:15.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:15 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 22 13:38:15 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 22 13:38:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:16.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Jan 22 13:38:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:16 compute-1 ceph-mon[81715]: Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 13:38:16 compute-1 ceph-mon[81715]: Updating compute-1:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 13:38:16 compute-1 ceph-mon[81715]: Updating compute-0:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 13:38:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:16 compute-1 ceph-mon[81715]: pgmap v239: 305 pgs: 1 active+recovering+remapped, 1 peering, 1 active+remapped, 1 active+recovery_wait+remapped, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 142 B/s wr, 20 op/s; 9/215 objects misplaced (4.186%); 30 B/s, 1 objects/s recovering
Jan 22 13:38:16 compute-1 ceph-mon[81715]: osdmap e85: 3 total, 3 up, 3 in
Jan 22 13:38:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:16 compute-1 ceph-mon[81715]: 10.7 deep-scrub starts
Jan 22 13:38:16 compute-1 ceph-mon[81715]: 10.7 deep-scrub ok
Jan 22 13:38:16 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 79 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:17.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:17 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 22 13:38:17 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 22 13:38:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:18.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Jan 22 13:38:18 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:18 compute-1 ceph-mon[81715]: pgmap v241: 305 pgs: 1 active+recovering+remapped, 1 peering, 1 active+remapped, 1 active+recovery_wait+remapped, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 307 B/s wr, 22 op/s; 9/215 objects misplaced (4.186%); 33 B/s, 1 objects/s recovering
Jan 22 13:38:18 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:18 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:18 compute-1 ceph-mon[81715]: pgmap v242: 305 pgs: 1 active+recovering+remapped, 1 peering, 1 active+remapped, 1 active+recovery_wait+remapped, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 18 op/s; 9/213 objects misplaced (4.225%); 27 B/s, 0 objects/s recovering
Jan 22 13:38:18 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:18 compute-1 ceph-mon[81715]: 10.9 scrub starts
Jan 22 13:38:18 compute-1 ceph-mon[81715]: 10.9 scrub ok
Jan 22 13:38:18 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:18 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:18 compute-1 ceph-mon[81715]: osdmap e86: 3 total, 3 up, 3 in
Jan 22 13:38:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:18 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 22 13:38:18 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 22 13:38:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:19.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:20.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:20 compute-1 ceph-mon[81715]: 8.1d scrub starts
Jan 22 13:38:20 compute-1 ceph-mon[81715]: 8.1d scrub ok
Jan 22 13:38:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:20 compute-1 ceph-mon[81715]: pgmap v244: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 1 peering, 2 active+clean+laggy, 300 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 18 op/s; 3/213 objects misplaced (1.408%); 27 B/s, 1 objects/s recovering
Jan 22 13:38:20 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 83 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:20 compute-1 ceph-mon[81715]: 10.4 scrub starts
Jan 22 13:38:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:20 compute-1 ceph-mon[81715]: 10.a scrub starts
Jan 22 13:38:20 compute-1 ceph-mon[81715]: 10.a scrub ok
Jan 22 13:38:20 compute-1 ceph-mon[81715]: 8.1e scrub starts
Jan 22 13:38:20 compute-1 ceph-mon[81715]: 8.1e scrub ok
Jan 22 13:38:20 compute-1 ceph-mon[81715]: 10.4 scrub ok
Jan 22 13:38:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:20 compute-1 ceph-mon[81715]: osdmap e87: 3 total, 3 up, 3 in
Jan 22 13:38:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:38:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:38:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:20 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Jan 22 13:38:21 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.c deep-scrub starts
Jan 22 13:38:21 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.c deep-scrub ok
Jan 22 13:38:21 compute-1 ceph-mon[81715]: 10.b scrub starts
Jan 22 13:38:21 compute-1 ceph-mon[81715]: 10.b scrub ok
Jan 22 13:38:21 compute-1 ceph-mon[81715]: pgmap v246: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 1 peering, 2 active+clean+laggy, 300 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 3/214 objects misplaced (1.402%); 0 B/s, 0 objects/s recovering
Jan 22 13:38:21 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:21 compute-1 ceph-mon[81715]: 8.1c scrub starts
Jan 22 13:38:21 compute-1 ceph-mon[81715]: 8.1c scrub ok
Jan 22 13:38:21 compute-1 ceph-mon[81715]: osdmap e88: 3 total, 3 up, 3 in
Jan 22 13:38:21 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:21.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Jan 22 13:38:21 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 89 pg[9.a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89) [1] r=0 lpr=89 pi=[59,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:21 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 89 pg[9.1a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89) [1] r=0 lpr=89 pi=[59,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:22 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 22 13:38:22 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 22 13:38:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:22.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:22 compute-1 ceph-mon[81715]: 10.c deep-scrub starts
Jan 22 13:38:22 compute-1 ceph-mon[81715]: 10.c deep-scrub ok
Jan 22 13:38:22 compute-1 ceph-mon[81715]: 9.2 scrub starts
Jan 22 13:38:22 compute-1 ceph-mon[81715]: 9.2 scrub ok
Jan 22 13:38:22 compute-1 ceph-mon[81715]: pgmap v248: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 22 13:38:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 22 13:38:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 22 13:38:22 compute-1 ceph-mon[81715]: osdmap e89: 3 total, 3 up, 3 in
Jan 22 13:38:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Jan 22 13:38:22 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 90 pg[9.1a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[59,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:22 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 90 pg[9.1a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[59,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:22 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 90 pg[9.a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[59,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:22 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 90 pg[9.a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[59,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:22 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 22 13:38:23 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 22 13:38:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:23.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:24 compute-1 ceph-mon[81715]: 9.4 deep-scrub starts
Jan 22 13:38:24 compute-1 ceph-mon[81715]: 10.d scrub starts
Jan 22 13:38:24 compute-1 ceph-mon[81715]: 10.d scrub ok
Jan 22 13:38:24 compute-1 ceph-mon[81715]: 9.4 deep-scrub ok
Jan 22 13:38:24 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 93 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:24 compute-1 ceph-mon[81715]: osdmap e90: 3 total, 3 up, 3 in
Jan 22 13:38:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 22 13:38:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Jan 22 13:38:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:24.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:24 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 22 13:38:25 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 22 13:38:25 compute-1 ceph-mon[81715]: 10.e scrub starts
Jan 22 13:38:25 compute-1 ceph-mon[81715]: 10.e scrub ok
Jan 22 13:38:25 compute-1 ceph-mon[81715]: pgmap v251: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 68 B/s, 2 objects/s recovering
Jan 22 13:38:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 22 13:38:25 compute-1 ceph-mon[81715]: osdmap e91: 3 total, 3 up, 3 in
Jan 22 13:38:25 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Jan 22 13:38:25 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 92 pg[9.a( v 62'714 (0'0,62'714] local-lis/les=0/0 n=9 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 luod=0'0 crt=62'714 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:25 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 92 pg[9.a( v 62'714 (0'0,62'714] local-lis/les=0/0 n=9 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 crt=62'714 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:25 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 92 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=0/0 n=4 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 luod=0'0 crt=61'690 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:25 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 92 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=0/0 n=4 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 crt=61'690 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:25.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:25 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 22 13:38:26 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 22 13:38:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Jan 22 13:38:26 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 93 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=92/93 n=4 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 crt=61'690 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:26 compute-1 ceph-mon[81715]: 10.16 scrub starts
Jan 22 13:38:26 compute-1 ceph-mon[81715]: 10.16 scrub ok
Jan 22 13:38:26 compute-1 ceph-mon[81715]: osdmap e92: 3 total, 3 up, 3 in
Jan 22 13:38:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 22 13:38:26 compute-1 ceph-mon[81715]: 8.16 deep-scrub starts
Jan 22 13:38:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:26 compute-1 ceph-mon[81715]: 8.16 deep-scrub ok
Jan 22 13:38:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:26 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 93 pg[9.a( v 62'714 (0'0,62'714] local-lis/les=92/93 n=9 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 crt=62'714 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:26.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:26 compute-1 sudo[87111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:26 compute-1 sudo[87111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:26 compute-1 sudo[87111]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:26 compute-1 sudo[87136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:38:26 compute-1 sudo[87136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:26 compute-1 sudo[87136]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:27 compute-1 ceph-mon[81715]: pgmap v254: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:38:27 compute-1 ceph-mon[81715]: 10.17 scrub starts
Jan 22 13:38:27 compute-1 ceph-mon[81715]: 10.17 scrub ok
Jan 22 13:38:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 22 13:38:27 compute-1 ceph-mon[81715]: osdmap e93: 3 total, 3 up, 3 in
Jan 22 13:38:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 13:38:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 13:38:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:27.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:28.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Jan 22 13:38:28 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 94 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=94) [1] r=0 lpr=94 pi=[72,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:28 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 94 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=94) [1] r=0 lpr=94 pi=[72,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:28 compute-1 ceph-mon[81715]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 22 13:38:28 compute-1 ceph-mon[81715]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 13:38:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 22 13:38:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.nyayzk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 13:38:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 13:38:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:28 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 13:38:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:29.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:29 compute-1 systemd[72521]: Created slice User Background Tasks Slice.
Jan 22 13:38:29 compute-1 systemd[72521]: Starting Cleanup of User's Temporary Files and Directories...
Jan 22 13:38:30 compute-1 systemd[72521]: Finished Cleanup of User's Temporary Files and Directories.
Jan 22 13:38:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:30.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:30 compute-1 ceph-mon[81715]: pgmap v256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:38:30 compute-1 ceph-mon[81715]: Reconfiguring mgr.compute-0.nyayzk (monmap changed)...
Jan 22 13:38:30 compute-1 ceph-mon[81715]: Reconfiguring daemon mgr.compute-0.nyayzk on compute-0
Jan 22 13:38:30 compute-1 ceph-mon[81715]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 22 13:38:30 compute-1 ceph-mon[81715]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 22 13:38:30 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 98 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:30 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 22 13:38:30 compute-1 ceph-mon[81715]: osdmap e94: 3 total, 3 up, 3 in
Jan 22 13:38:30 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:30 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 22 13:38:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Jan 22 13:38:31 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 95 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:31 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 95 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:31 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 95 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:31 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 95 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:31.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:31 compute-1 ceph-mon[81715]: pgmap v258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 2 objects/s recovering
Jan 22 13:38:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:31 compute-1 ceph-mon[81715]: 9.c scrub starts
Jan 22 13:38:31 compute-1 ceph-mon[81715]: 9.c scrub ok
Jan 22 13:38:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 22 13:38:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:31 compute-1 ceph-mon[81715]: osdmap e95: 3 total, 3 up, 3 in
Jan 22 13:38:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:31 compute-1 ceph-mon[81715]: Reconfiguring osd.0 (monmap changed)...
Jan 22 13:38:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 22 13:38:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:31 compute-1 ceph-mon[81715]: Reconfiguring daemon osd.0 on compute-0
Jan 22 13:38:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 13:38:32 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1a deep-scrub starts
Jan 22 13:38:32 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1a deep-scrub ok
Jan 22 13:38:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:32.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Jan 22 13:38:32 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 96 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=96) [1] r=0 lpr=96 pi=[70,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:32 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 96 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=96) [1] r=0 lpr=96 pi=[70,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:32 compute-1 ceph-mon[81715]: pgmap v260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 2 objects/s recovering
Jan 22 13:38:32 compute-1 ceph-mon[81715]: 10.1a deep-scrub starts
Jan 22 13:38:32 compute-1 ceph-mon[81715]: 10.1a deep-scrub ok
Jan 22 13:38:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:32 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 13:38:32 compute-1 ceph-mon[81715]: osdmap e96: 3 total, 3 up, 3 in
Jan 22 13:38:33 compute-1 sudo[87204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:33 compute-1 sudo[87204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:33 compute-1 sudo[87204]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:33 compute-1 sudo[87229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:38:33 compute-1 sudo[87229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:33 compute-1 sudo[87229]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:33 compute-1 sudo[87254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:33 compute-1 sudo[87254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:33 compute-1 sudo[87254]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:33 compute-1 sudo[87279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:38:33 compute-1 sudo[87279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:33 compute-1 podman[87321]: 2026-01-22 13:38:33.678991876 +0000 UTC m=+0.043809756 container create 27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 13:38:33 compute-1 systemd[1]: Started libpod-conmon-27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26.scope.
Jan 22 13:38:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:33 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:38:33 compute-1 podman[87321]: 2026-01-22 13:38:33.660153384 +0000 UTC m=+0.024971284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:38:33 compute-1 podman[87321]: 2026-01-22 13:38:33.757867333 +0000 UTC m=+0.122685233 container init 27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_liskov, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 13:38:33 compute-1 podman[87321]: 2026-01-22 13:38:33.765212107 +0000 UTC m=+0.130029987 container start 27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_liskov, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 13:38:33 compute-1 podman[87321]: 2026-01-22 13:38:33.768317953 +0000 UTC m=+0.133135893 container attach 27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 13:38:33 compute-1 modest_liskov[87337]: 167 167
Jan 22 13:38:33 compute-1 systemd[1]: libpod-27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26.scope: Deactivated successfully.
Jan 22 13:38:33 compute-1 podman[87321]: 2026-01-22 13:38:33.772467429 +0000 UTC m=+0.137285309 container died 27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_liskov, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:38:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-d0e00fd8d55eb3c21c044fdddbae21f6ea7c462e97840cba23b1aafb8201b7ad-merged.mount: Deactivated successfully.
Jan 22 13:38:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:33.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:33 compute-1 podman[87321]: 2026-01-22 13:38:33.81180902 +0000 UTC m=+0.176626900 container remove 27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_liskov, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 13:38:33 compute-1 systemd[1]: libpod-conmon-27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26.scope: Deactivated successfully.
Jan 22 13:38:33 compute-1 sudo[87279]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:34.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:35 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 22 13:38:35 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 22 13:38:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:35.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:36 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 22 13:38:36 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 22 13:38:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Jan 22 13:38:36 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 97 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[70,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:36 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 97 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[70,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:36 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 97 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[70,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:36 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 97 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[70,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:36 compute-1 ceph-mon[81715]: 9.10 scrub starts
Jan 22 13:38:36 compute-1 ceph-mon[81715]: 9.10 scrub ok
Jan 22 13:38:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:36 compute-1 ceph-mon[81715]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 22 13:38:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 13:38:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:36 compute-1 ceph-mon[81715]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 22 13:38:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:36.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:36 compute-1 sudo[87355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:36 compute-1 sudo[87355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:36 compute-1 sudo[87355]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:36 compute-1 sudo[87380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:38:36 compute-1 sudo[87380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:36 compute-1 sudo[87380]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:36 compute-1 sudo[87405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:36 compute-1 sudo[87405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:36 compute-1 sudo[87405]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:36 compute-1 sudo[87432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:38:36 compute-1 sudo[87432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:36 compute-1 podman[87478]: 2026-01-22 13:38:36.762488168 +0000 UTC m=+0.042380416 container create ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wright, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 13:38:36 compute-1 systemd[1]: Started libpod-conmon-ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a.scope.
Jan 22 13:38:36 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:38:36 compute-1 podman[87478]: 2026-01-22 13:38:36.743336697 +0000 UTC m=+0.023228965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:38:36 compute-1 podman[87478]: 2026-01-22 13:38:36.84115618 +0000 UTC m=+0.121048448 container init ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 13:38:36 compute-1 podman[87478]: 2026-01-22 13:38:36.847432014 +0000 UTC m=+0.127324262 container start ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:38:36 compute-1 podman[87478]: 2026-01-22 13:38:36.853532973 +0000 UTC m=+0.133425241 container attach ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:38:36 compute-1 condescending_wright[87494]: 167 167
Jan 22 13:38:36 compute-1 systemd[1]: libpod-ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a.scope: Deactivated successfully.
Jan 22 13:38:36 compute-1 podman[87478]: 2026-01-22 13:38:36.856221137 +0000 UTC m=+0.136113385 container died ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wright, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:38:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-d7a9fbfaca9ec1bcbb7ca7623cc62de79d560ca7da0785484a7d63a764ac740f-merged.mount: Deactivated successfully.
Jan 22 13:38:36 compute-1 podman[87478]: 2026-01-22 13:38:36.902048639 +0000 UTC m=+0.181940887 container remove ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:38:36 compute-1 systemd[1]: libpod-conmon-ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a.scope: Deactivated successfully.
Jan 22 13:38:37 compute-1 sudo[87432]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:37 compute-1 sudo[87519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:37 compute-1 sudo[87519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:37 compute-1 sudo[87519]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:37 compute-1 sudo[87544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:38:37 compute-1 sudo[87544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:37 compute-1 sudo[87544]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:37 compute-1 sudo[87569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:37 compute-1 sudo[87569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:37 compute-1 sudo[87569]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Jan 22 13:38:37 compute-1 ceph-mon[81715]: pgmap v262: 305 pgs: 2 unknown, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:38:37 compute-1 ceph-mon[81715]: 10.3 scrub starts
Jan 22 13:38:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:37 compute-1 ceph-mon[81715]: 9.11 scrub starts
Jan 22 13:38:37 compute-1 ceph-mon[81715]: 10.3 scrub ok
Jan 22 13:38:37 compute-1 ceph-mon[81715]: 9.11 scrub ok
Jan 22 13:38:37 compute-1 ceph-mon[81715]: 10.1c scrub starts
Jan 22 13:38:37 compute-1 ceph-mon[81715]: 10.1c scrub ok
Jan 22 13:38:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:37 compute-1 ceph-mon[81715]: pgmap v263: 305 pgs: 2 unknown, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:38:37 compute-1 ceph-mon[81715]: 10.1d scrub starts
Jan 22 13:38:37 compute-1 ceph-mon[81715]: 10.1d scrub ok
Jan 22 13:38:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:37 compute-1 ceph-mon[81715]: osdmap e97: 3 total, 3 up, 3 in
Jan 22 13:38:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:37 compute-1 ceph-mon[81715]: Reconfiguring osd.1 (monmap changed)...
Jan 22 13:38:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 22 13:38:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:37 compute-1 ceph-mon[81715]: Reconfiguring daemon osd.1 on compute-1
Jan 22 13:38:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 13:38:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 13:38:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:37 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 98 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98) [1] r=0 lpr=98 pi=[72,98)/1 luod=0'0 crt=62'695 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:37 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 98 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98) [1] r=0 lpr=98 pi=[72,98)/1 crt=62'695 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:37 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 98 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=0/0 n=7 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98) [1] r=0 lpr=98 pi=[72,98)/1 luod=0'0 crt=62'705 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:37 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 98 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=0/0 n=7 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98) [1] r=0 lpr=98 pi=[72,98)/1 crt=62'705 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:37 compute-1 sudo[87594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:38:37 compute-1 sudo[87594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:37 compute-1 podman[87636]: 2026-01-22 13:38:37.564692557 +0000 UTC m=+0.037823419 container create 9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_driscoll, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:38:37 compute-1 systemd[1]: Started libpod-conmon-9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b.scope.
Jan 22 13:38:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Jan 22 13:38:37 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 99 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99) [1] r=0 lpr=99 pi=[70,99)/1 luod=0'0 crt=62'695 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:37 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 99 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99) [1] r=0 lpr=99 pi=[70,99)/1 crt=62'695 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:37 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 99 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=0/0 n=7 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99) [1] r=0 lpr=99 pi=[70,99)/1 luod=0'0 crt=62'704 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:37 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 99 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=0/0 n=7 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99) [1] r=0 lpr=99 pi=[70,99)/1 crt=62'704 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:37 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:38:37 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 99 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=98/99 n=5 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98) [1] r=0 lpr=98 pi=[72,98)/1 crt=62'695 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:37 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 99 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=98/99 n=7 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98) [1] r=0 lpr=98 pi=[72,98)/1 crt=62'705 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:37 compute-1 podman[87636]: 2026-01-22 13:38:37.547895941 +0000 UTC m=+0.021026833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:38:37 compute-1 podman[87636]: 2026-01-22 13:38:37.645085557 +0000 UTC m=+0.118216449 container init 9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_driscoll, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 13:38:37 compute-1 podman[87636]: 2026-01-22 13:38:37.651153295 +0000 UTC m=+0.124284167 container start 9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_driscoll, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 13:38:37 compute-1 trusting_driscoll[87652]: 167 167
Jan 22 13:38:37 compute-1 systemd[1]: libpod-9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b.scope: Deactivated successfully.
Jan 22 13:38:37 compute-1 podman[87636]: 2026-01-22 13:38:37.655680121 +0000 UTC m=+0.128811033 container attach 9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_driscoll, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:38:37 compute-1 podman[87636]: 2026-01-22 13:38:37.656627107 +0000 UTC m=+0.129757969 container died 9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_driscoll, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:38:37 compute-1 systemd[1]: var-lib-containers-storage-overlay-4813cf314fe975fdd84ef3949dd6a8999775a1f143ce594ea00f0239ce7008b3-merged.mount: Deactivated successfully.
Jan 22 13:38:37 compute-1 podman[87636]: 2026-01-22 13:38:37.691403652 +0000 UTC m=+0.164534514 container remove 9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_driscoll, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 13:38:37 compute-1 systemd[1]: libpod-conmon-9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b.scope: Deactivated successfully.
Jan 22 13:38:37 compute-1 sudo[87594]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:37.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:38.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:38 compute-1 ceph-mon[81715]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 22 13:38:38 compute-1 ceph-mon[81715]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 22 13:38:38 compute-1 ceph-mon[81715]: osdmap e98: 3 total, 3 up, 3 in
Jan 22 13:38:38 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:38 compute-1 ceph-mon[81715]: osdmap e99: 3 total, 3 up, 3 in
Jan 22 13:38:38 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:38 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:38 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 13:38:38 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 13:38:38 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Jan 22 13:38:38 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 100 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=99/100 n=5 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99) [1] r=0 lpr=99 pi=[70,99)/1 crt=62'695 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:38 compute-1 sudo[87679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:38 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 100 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=99/100 n=7 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99) [1] r=0 lpr=99 pi=[70,99)/1 crt=62'704 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:38 compute-1 sudo[87679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:38 compute-1 sudo[87679]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:38 compute-1 sudo[87711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:38:38 compute-1 sudo[87711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:38 compute-1 sudo[87711]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:38 compute-1 sudo[87736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:38 compute-1 sudo[87736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:38 compute-1 sudo[87736]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:38 compute-1 sudo[87761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:38:38 compute-1 sudo[87761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:39 compute-1 podman[87857]: 2026-01-22 13:38:39.317572464 +0000 UTC m=+0.070260779 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:38:39 compute-1 ceph-mon[81715]: pgmap v266: 305 pgs: 2 remapped+peering, 2 unknown, 2 active+clean+laggy, 299 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:38:39 compute-1 ceph-mon[81715]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 22 13:38:39 compute-1 ceph-mon[81715]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 22 13:38:39 compute-1 ceph-mon[81715]: 9.14 scrub starts
Jan 22 13:38:39 compute-1 ceph-mon[81715]: 9.14 scrub ok
Jan 22 13:38:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:39 compute-1 ceph-mon[81715]: osdmap e100: 3 total, 3 up, 3 in
Jan 22 13:38:39 compute-1 ceph-mon[81715]: 10.11 deep-scrub starts
Jan 22 13:38:39 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:39 compute-1 ceph-mon[81715]: 10.11 deep-scrub ok
Jan 22 13:38:39 compute-1 podman[87857]: 2026-01-22 13:38:39.424829129 +0000 UTC m=+0.177517404 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 13:38:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:39 compute-1 sudo[87761]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:38:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:40.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:38:40 compute-1 ceph-mon[81715]: pgmap v269: 305 pgs: 2 peering, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s; 137 B/s, 5 objects/s recovering
Jan 22 13:38:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:40 compute-1 ceph-mon[81715]: 9.1c scrub starts
Jan 22 13:38:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:40 compute-1 ceph-mon[81715]: 9.1c scrub ok
Jan 22 13:38:41 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 22 13:38:41 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 22 13:38:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:42 compute-1 ceph-mon[81715]: 11.2 scrub starts
Jan 22 13:38:42 compute-1 ceph-mon[81715]: 11.2 scrub ok
Jan 22 13:38:42 compute-1 ceph-mon[81715]: 10.1f scrub starts
Jan 22 13:38:42 compute-1 ceph-mon[81715]: 10.1f scrub ok
Jan 22 13:38:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:38:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:38:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:38:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:42.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:43 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 22 13:38:43 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 22 13:38:43 compute-1 ceph-mon[81715]: pgmap v270: 305 pgs: 1 active+clean+scrubbing, 2 peering, 2 active+clean+laggy, 300 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 103 B/s, 4 objects/s recovering
Jan 22 13:38:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:43 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:43.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:44.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:44 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:44 compute-1 ceph-mon[81715]: 11.1e scrub starts
Jan 22 13:38:44 compute-1 ceph-mon[81715]: 11.1e scrub ok
Jan 22 13:38:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 22 13:38:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Jan 22 13:38:44 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 101 pg[9.10( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=101) [1] r=0 lpr=101 pi=[59,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:45 compute-1 ceph-mon[81715]: pgmap v271: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 123 B/s, 5 objects/s recovering
Jan 22 13:38:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 22 13:38:45 compute-1 ceph-mon[81715]: osdmap e101: 3 total, 3 up, 3 in
Jan 22 13:38:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:45 compute-1 ceph-mon[81715]: 11.a deep-scrub starts
Jan 22 13:38:45 compute-1 ceph-mon[81715]: 11.a deep-scrub ok
Jan 22 13:38:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Jan 22 13:38:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 102 pg[9.10( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=102) [1]/[0] r=-1 lpr=102 pi=[59,102)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 102 pg[9.10( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=102) [1]/[0] r=-1 lpr=102 pi=[59,102)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:45.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:46 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 22 13:38:46 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 22 13:38:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:46.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:46 compute-1 ceph-mon[81715]: 11.6 scrub starts
Jan 22 13:38:46 compute-1 ceph-mon[81715]: 11.6 scrub ok
Jan 22 13:38:46 compute-1 ceph-mon[81715]: pgmap v273: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 97 B/s, 4 objects/s recovering
Jan 22 13:38:46 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 22 13:38:46 compute-1 ceph-mon[81715]: osdmap e102: 3 total, 3 up, 3 in
Jan 22 13:38:46 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Jan 22 13:38:46 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 103 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=103) [1] r=0 lpr=103 pi=[59,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Jan 22 13:38:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 104 pg[9.10( v 58'684 (0'0,58'684] local-lis/les=0/0 n=2 ec=59/49 lis/c=102/59 les/c/f=103/60/0 sis=104) [1] r=0 lpr=104 pi=[59,104)/1 luod=0'0 crt=58'684 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 104 pg[9.10( v 58'684 (0'0,58'684] local-lis/les=0/0 n=2 ec=59/49 lis/c=102/59 les/c/f=103/60/0 sis=104) [1] r=0 lpr=104 pi=[59,104)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 104 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=104) [1]/[0] r=-1 lpr=104 pi=[59,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:47 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 104 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=104) [1]/[0] r=-1 lpr=104 pi=[59,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:47 compute-1 ceph-mon[81715]: 11.9 scrub starts
Jan 22 13:38:47 compute-1 ceph-mon[81715]: 11.9 scrub ok
Jan 22 13:38:47 compute-1 ceph-mon[81715]: 11.1d scrub starts
Jan 22 13:38:47 compute-1 ceph-mon[81715]: 11.1d scrub ok
Jan 22 13:38:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 22 13:38:47 compute-1 ceph-mon[81715]: osdmap e103: 3 total, 3 up, 3 in
Jan 22 13:38:47 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:47 compute-1 ceph-mon[81715]: 10.10 scrub starts
Jan 22 13:38:47 compute-1 ceph-mon[81715]: 10.10 scrub ok
Jan 22 13:38:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:47.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:48.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Jan 22 13:38:48 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 105 pg[9.10( v 58'684 (0'0,58'684] local-lis/les=104/105 n=2 ec=59/49 lis/c=102/59 les/c/f=103/60/0 sis=104) [1] r=0 lpr=104 pi=[59,104)/1 crt=58'684 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:48 compute-1 ceph-mon[81715]: pgmap v276: 305 pgs: 1 remapped+peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 22 13:38:48 compute-1 ceph-mon[81715]: osdmap e104: 3 total, 3 up, 3 in
Jan 22 13:38:48 compute-1 ceph-mon[81715]: 10.f scrub starts
Jan 22 13:38:48 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:48 compute-1 ceph-mon[81715]: 10.f scrub ok
Jan 22 13:38:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:48 compute-1 ceph-mon[81715]: osdmap e105: 3 total, 3 up, 3 in
Jan 22 13:38:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:48 compute-1 sudo[87989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:48 compute-1 sudo[87989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:48 compute-1 sudo[87989]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:48 compute-1 sudo[88014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:38:48 compute-1 sudo[88014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:48 compute-1 sudo[88014]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:49.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:50 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 22 13:38:50 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 22 13:38:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:50.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:51 compute-1 ceph-mon[81715]: 10.12 scrub starts
Jan 22 13:38:51 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:51 compute-1 ceph-mon[81715]: 10.12 scrub ok
Jan 22 13:38:51 compute-1 ceph-mon[81715]: 11.b scrub starts
Jan 22 13:38:51 compute-1 ceph-mon[81715]: 11.b scrub ok
Jan 22 13:38:51 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 22 13:38:51 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 22 13:38:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Jan 22 13:38:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 106 pg[9.11( v 62'701 (0'0,62'701] local-lis/les=0/0 n=5 ec=59/49 lis/c=104/59 les/c/f=105/60/0 sis=106) [1] r=0 lpr=106 pi=[59,106)/1 luod=0'0 crt=62'701 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 106 pg[9.11( v 62'701 (0'0,62'701] local-lis/les=0/0 n=5 ec=59/49 lis/c=104/59 les/c/f=105/60/0 sis=106) [1] r=0 lpr=106 pi=[59,106)/1 crt=62'701 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:51.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:52 compute-1 ceph-mon[81715]: pgmap v279: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:38:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:52 compute-1 ceph-mon[81715]: 8.1b scrub starts
Jan 22 13:38:52 compute-1 ceph-mon[81715]: 8.1b scrub ok
Jan 22 13:38:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:52 compute-1 ceph-mon[81715]: 11.1 scrub starts
Jan 22 13:38:52 compute-1 ceph-mon[81715]: 11.1 scrub ok
Jan 22 13:38:52 compute-1 ceph-mon[81715]: osdmap e106: 3 total, 3 up, 3 in
Jan 22 13:38:52 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 22 13:38:52 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 22 13:38:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:52.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Jan 22 13:38:52 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 107 pg[9.11( v 62'701 (0'0,62'701] local-lis/les=106/107 n=5 ec=59/49 lis/c=104/59 les/c/f=105/60/0 sis=106) [1] r=0 lpr=106 pi=[59,106)/1 crt=62'701 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:53 compute-1 ceph-mon[81715]: pgmap v280: 305 pgs: 1 active+remapped, 1 peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Jan 22 13:38:53 compute-1 ceph-mon[81715]: 8.9 scrub starts
Jan 22 13:38:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:53 compute-1 ceph-mon[81715]: 8.9 scrub ok
Jan 22 13:38:53 compute-1 ceph-mon[81715]: 8.8 scrub starts
Jan 22 13:38:53 compute-1 ceph-mon[81715]: 8.8 scrub ok
Jan 22 13:38:53 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:53 compute-1 ceph-mon[81715]: osdmap e107: 3 total, 3 up, 3 in
Jan 22 13:38:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:38:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:53.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:38:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:54.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:54 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 22 13:38:54 compute-1 ceph-mon[81715]: 10.1 scrub starts
Jan 22 13:38:54 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:54 compute-1 ceph-mon[81715]: 10.1 scrub ok
Jan 22 13:38:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Jan 22 13:38:55 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.14 deep-scrub starts
Jan 22 13:38:55 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.14 deep-scrub ok
Jan 22 13:38:55 compute-1 ceph-mon[81715]: pgmap v283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Jan 22 13:38:55 compute-1 ceph-mon[81715]: 11.c scrub starts
Jan 22 13:38:55 compute-1 ceph-mon[81715]: 11.c scrub ok
Jan 22 13:38:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 22 13:38:55 compute-1 ceph-mon[81715]: osdmap e108: 3 total, 3 up, 3 in
Jan 22 13:38:55 compute-1 ceph-mon[81715]: 11.8 scrub starts
Jan 22 13:38:55 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:55 compute-1 ceph-mon[81715]: 11.8 scrub ok
Jan 22 13:38:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:55.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:55 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Jan 22 13:38:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:56.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:56 compute-1 ceph-mon[81715]: 8.14 deep-scrub starts
Jan 22 13:38:56 compute-1 ceph-mon[81715]: 8.14 deep-scrub ok
Jan 22 13:38:56 compute-1 ceph-mon[81715]: pgmap v285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Jan 22 13:38:56 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 22 13:38:56 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 22 13:38:56 compute-1 ceph-mon[81715]: osdmap e109: 3 total, 3 up, 3 in
Jan 22 13:38:56 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:56 compute-1 ceph-mon[81715]: 11.d scrub starts
Jan 22 13:38:56 compute-1 ceph-mon[81715]: 11.d scrub ok
Jan 22 13:38:57 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 22 13:38:57 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 22 13:38:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:57.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Jan 22 13:38:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:58 compute-1 ceph-mon[81715]: 8.10 scrub starts
Jan 22 13:38:58 compute-1 ceph-mon[81715]: 8.10 scrub ok
Jan 22 13:38:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 22 13:38:58 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 22 13:38:58 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 22 13:38:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:58.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:58 compute-1 sudo[87035]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:59 compute-1 ceph-mon[81715]: pgmap v287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:38:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:59 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 22 13:38:59 compute-1 ceph-mon[81715]: osdmap e110: 3 total, 3 up, 3 in
Jan 22 13:38:59 compute-1 ceph-mon[81715]: 11.5 scrub starts
Jan 22 13:38:59 compute-1 ceph-mon[81715]: 11.5 scrub ok
Jan 22 13:38:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:38:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:59.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:00 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Jan 22 13:39:00 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 111 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=111) [1] r=0 lpr=111 pi=[72,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:39:00 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:00 compute-1 ceph-mon[81715]: 11.10 scrub starts
Jan 22 13:39:00 compute-1 ceph-mon[81715]: 11.10 scrub ok
Jan 22 13:39:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 22 13:39:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:00.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Jan 22 13:39:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 112 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[72,112)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:01 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 112 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[72,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:01 compute-1 ceph-mon[81715]: pgmap v289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:01 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 22 13:39:01 compute-1 ceph-mon[81715]: osdmap e111: 3 total, 3 up, 3 in
Jan 22 13:39:01 compute-1 ceph-mon[81715]: osdmap e112: 3 total, 3 up, 3 in
Jan 22 13:39:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:01.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:02 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 22 13:39:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:02.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:02 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 22 13:39:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Jan 22 13:39:02 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:02 compute-1 ceph-mon[81715]: 11.11 scrub starts
Jan 22 13:39:02 compute-1 ceph-mon[81715]: 11.11 scrub ok
Jan 22 13:39:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 22 13:39:02 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 113 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=77/78 n=4 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=113 pruub=8.887313843s) [2] r=-1 lpr=113 pi=[77,113)/1 crt=58'684 mlcod 0'0 active pruub 276.769866943s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:02 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 113 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=77/78 n=4 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=113 pruub=8.887105942s) [2] r=-1 lpr=113 pi=[77,113)/1 crt=58'684 mlcod 0'0 unknown NOTIFY pruub 276.769866943s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Jan 22 13:39:02 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 114 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=77/78 n=4 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=114) [2]/[1] r=0 lpr=114 pi=[77,114)/1 crt=58'684 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:02 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 114 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=77/78 n=4 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=114) [2]/[1] r=0 lpr=114 pi=[77,114)/1 crt=58'684 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 13:39:03 compute-1 ceph-mon[81715]: pgmap v292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:03 compute-1 ceph-mon[81715]: 11.7 scrub starts
Jan 22 13:39:03 compute-1 ceph-mon[81715]: 11.7 scrub ok
Jan 22 13:39:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 22 13:39:03 compute-1 ceph-mon[81715]: osdmap e113: 3 total, 3 up, 3 in
Jan 22 13:39:03 compute-1 ceph-mon[81715]: osdmap e114: 3 total, 3 up, 3 in
Jan 22 13:39:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:03.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:39:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:04.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:39:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:05.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:06.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:06 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 22 13:39:06 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 22 13:39:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:07.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:08.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:39:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:09.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:39:10 compute-1 ceph-mds[83358]: mds.beacon.cephfs.compute-1.ofmmzj missed beacon ack from the monitors
Jan 22 13:39:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:10.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Jan 22 13:39:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 115 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=112/72 les/c/f=113/73/0 sis=115) [1] r=0 lpr=115 pi=[72,115)/1 luod=0'0 crt=62'690 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 115 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=112/72 les/c/f=113/73/0 sis=115) [1] r=0 lpr=115 pi=[72,115)/1 crt=62'690 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:39:11 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 115 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=114/115 n=4 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=114) [2]/[1] async=[2] r=0 lpr=114 pi=[77,114)/1 crt=58'684 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:39:11 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:11 compute-1 ceph-mon[81715]: pgmap v295: 305 pgs: 1 remapped+peering, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:11 compute-1 ceph-mon[81715]: 11.15 scrub starts
Jan 22 13:39:11 compute-1 ceph-mon[81715]: 11.15 scrub ok
Jan 22 13:39:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:11.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:12.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Jan 22 13:39:12 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 116 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=114/115 n=4 ec=59/49 lis/c=114/77 les/c/f=115/78/0 sis=116 pruub=14.448846817s) [2] async=[2] r=-1 lpr=116 pi=[77,116)/1 crt=58'684 mlcod 58'684 active pruub 292.724243164s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:12 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 116 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=114/115 n=4 ec=59/49 lis/c=114/77 les/c/f=115/78/0 sis=116 pruub=14.448719978s) [2] r=-1 lpr=116 pi=[77,116)/1 crt=58'684 mlcod 0'0 unknown NOTIFY pruub 292.724243164s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:12 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 116 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=115/116 n=5 ec=59/49 lis/c=112/72 les/c/f=113/73/0 sis=115) [1] r=0 lpr=115 pi=[72,115)/1 crt=62'690 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:39:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-1 ceph-mon[81715]: pgmap v296: 305 pgs: 1 remapped+peering, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-1 ceph-mon[81715]: 8.19 scrub starts
Jan 22 13:39:13 compute-1 ceph-mon[81715]: 8.19 scrub ok
Jan 22 13:39:13 compute-1 ceph-mon[81715]: 8.6 scrub starts
Jan 22 13:39:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-1 ceph-mon[81715]: 8.6 scrub ok
Jan 22 13:39:13 compute-1 ceph-mon[81715]: pgmap v297: 305 pgs: 1 active+remapped, 1 remapped+peering, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 17 B/s, 0 objects/s recovering
Jan 22 13:39:13 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-1 ceph-mon[81715]: 11.18 deep-scrub starts
Jan 22 13:39:13 compute-1 ceph-mon[81715]: 11.18 deep-scrub ok
Jan 22 13:39:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-1 ceph-mon[81715]: pgmap v298: 305 pgs: 1 remapped+peering, 1 active+remapped, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-1 ceph-mon[81715]: osdmap e115: 3 total, 3 up, 3 in
Jan 22 13:39:13 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 22 13:39:13 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 22 13:39:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:13.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:14 compute-1 ceph-mon[81715]: pgmap v300: 305 pgs: 1 active+remapped, 1 remapped+peering, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:14 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:14 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:14 compute-1 ceph-mon[81715]: osdmap e116: 3 total, 3 up, 3 in
Jan 22 13:39:14 compute-1 ceph-mon[81715]: 11.4 scrub starts
Jan 22 13:39:14 compute-1 ceph-mon[81715]: 11.4 scrub ok
Jan 22 13:39:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:14.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:14 compute-1 sudo[88188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brfffrvvevmlepgdwmzrxfkuogmkcziz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089154.1205125-370-28257597801382/AnsiballZ_command.py'
Jan 22 13:39:14 compute-1 sudo[88188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:14 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 22 13:39:14 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 22 13:39:14 compute-1 python3.9[88190]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:39:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Jan 22 13:39:15 compute-1 sudo[88188]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:15 compute-1 ceph-mon[81715]: pgmap v302: 305 pgs: 1 remapped+peering, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:15 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:15 compute-1 ceph-mon[81715]: 8.1f deep-scrub starts
Jan 22 13:39:15 compute-1 ceph-mon[81715]: 8.1f deep-scrub ok
Jan 22 13:39:15 compute-1 ceph-mon[81715]: 11.f scrub starts
Jan 22 13:39:15 compute-1 ceph-mon[81715]: 11.f scrub ok
Jan 22 13:39:15 compute-1 ceph-mon[81715]: osdmap e117: 3 total, 3 up, 3 in
Jan 22 13:39:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:15.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:16 compute-1 sudo[88475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksakrftwxljkqwpbkihsxegvoyvwtuxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089155.706808-394-99557824259153/AnsiballZ_selinux.py'
Jan 22 13:39:16 compute-1 sudo[88475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:16.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:16 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 22 13:39:16 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 22 13:39:16 compute-1 python3.9[88477]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 22 13:39:16 compute-1 sudo[88475]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:16 compute-1 ceph-mon[81715]: pgmap v304: 305 pgs: 1 remapped+peering, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:17 compute-1 sudo[88627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtlgpbmqptwkkvdcafhjszipfogcacwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089157.1477797-427-161783459434710/AnsiballZ_command.py'
Jan 22 13:39:17 compute-1 sudo[88627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:17 compute-1 python3.9[88629]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 22 13:39:17 compute-1 sudo[88627]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Jan 22 13:39:17 compute-1 ceph-mon[81715]: 11.1c scrub starts
Jan 22 13:39:17 compute-1 ceph-mon[81715]: 11.1c scrub ok
Jan 22 13:39:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:17 compute-1 ceph-mon[81715]: 11.1f scrub starts
Jan 22 13:39:17 compute-1 ceph-mon[81715]: 11.1f scrub ok
Jan 22 13:39:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 22 13:39:17 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 144 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:17.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:18 compute-1 sudo[88779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvrjnekzpolkcgkaafmdgzlartthmtxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089157.9408891-451-177782531530064/AnsiballZ_file.py'
Jan 22 13:39:18 compute-1 sudo[88779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:18.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:18 compute-1 python3.9[88781]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:39:18 compute-1 sudo[88779]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:19 compute-1 sudo[88931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmifnefjnuqbsuemwzcrjnqebvxpkahe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089158.6893733-475-237406803439153/AnsiballZ_mount.py'
Jan 22 13:39:19 compute-1 sudo[88931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:19 compute-1 python3.9[88933]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 22 13:39:19 compute-1 sudo[88931]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:19 compute-1 ceph-mon[81715]: pgmap v305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 17 B/s, 0 objects/s recovering
Jan 22 13:39:19 compute-1 ceph-mon[81715]: 11.3 scrub starts
Jan 22 13:39:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:19 compute-1 ceph-mon[81715]: 11.3 scrub ok
Jan 22 13:39:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 22 13:39:19 compute-1 ceph-mon[81715]: osdmap e118: 3 total, 3 up, 3 in
Jan 22 13:39:19 compute-1 ceph-mon[81715]: 10.14 scrub starts
Jan 22 13:39:19 compute-1 ceph-mon[81715]: 10.14 scrub ok
Jan 22 13:39:19 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 22 13:39:19 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 22 13:39:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:19.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:20.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:20 compute-1 sudo[89083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xewpfsnpfccjiuygivlnkopomcgvzzml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089160.6133718-559-31581062023318/AnsiballZ_file.py'
Jan 22 13:39:20 compute-1 sudo[89083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:20 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Jan 22 13:39:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:20 compute-1 ceph-mon[81715]: 8.11 scrub starts
Jan 22 13:39:20 compute-1 ceph-mon[81715]: 8.11 scrub ok
Jan 22 13:39:20 compute-1 ceph-mon[81715]: 8.12 scrub starts
Jan 22 13:39:20 compute-1 ceph-mon[81715]: 8.12 scrub ok
Jan 22 13:39:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 22 13:39:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:20 compute-1 ceph-mon[81715]: 11.19 scrub starts
Jan 22 13:39:20 compute-1 ceph-mon[81715]: 11.19 scrub ok
Jan 22 13:39:21 compute-1 python3.9[89085]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:39:21 compute-1 sudo[89083]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:21 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 22 13:39:21 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 22 13:39:21 compute-1 sudo[89235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhamaelajekdjvvfllkxqvowoagtblvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089161.4624379-583-159227671344335/AnsiballZ_stat.py'
Jan 22 13:39:21 compute-1 sudo[89235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:21.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:21 compute-1 python3.9[89237]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:39:21 compute-1 sudo[89235]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Jan 22 13:39:22 compute-1 ceph-mon[81715]: pgmap v307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 16 B/s, 0 objects/s recovering
Jan 22 13:39:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 22 13:39:22 compute-1 ceph-mon[81715]: osdmap e119: 3 total, 3 up, 3 in
Jan 22 13:39:22 compute-1 ceph-mon[81715]: 11.12 scrub starts
Jan 22 13:39:22 compute-1 ceph-mon[81715]: 11.12 scrub ok
Jan 22 13:39:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 22 13:39:22 compute-1 sudo[89313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzcgttnvjyfnaedqvajfgiqqktiiwzmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089161.4624379-583-159227671344335/AnsiballZ_file.py'
Jan 22 13:39:22 compute-1 sudo[89313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:22.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:22 compute-1 python3.9[89315]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:39:22 compute-1 sudo[89313]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Jan 22 13:39:23 compute-1 ceph-mon[81715]: pgmap v309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:23 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 22 13:39:23 compute-1 ceph-mon[81715]: osdmap e120: 3 total, 3 up, 3 in
Jan 22 13:39:23 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 154 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:23 compute-1 ceph-mon[81715]: osdmap e121: 3 total, 3 up, 3 in
Jan 22 13:39:23 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 22 13:39:23 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 22 13:39:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Jan 22 13:39:23 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 122 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=92/93 n=4 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=122 pruub=14.605878830s) [0] r=-1 lpr=122 pi=[92,122)/1 crt=61'690 mlcod 0'0 active pruub 303.567810059s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:23 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 122 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=92/93 n=4 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=122 pruub=14.605822563s) [0] r=-1 lpr=122 pi=[92,122)/1 crt=61'690 mlcod 0'0 unknown NOTIFY pruub 303.567810059s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:23 compute-1 sudo[89465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqyihpijozcgmzqvyxyfyolynrepiwkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089163.547496-646-97627650747367/AnsiballZ_stat.py'
Jan 22 13:39:23 compute-1 sudo[89465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:23.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:24 compute-1 python3.9[89467]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:39:24 compute-1 sudo[89465]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:24.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:24 compute-1 ceph-mon[81715]: 8.b deep-scrub starts
Jan 22 13:39:24 compute-1 ceph-mon[81715]: 8.b deep-scrub ok
Jan 22 13:39:24 compute-1 ceph-mon[81715]: 11.1a scrub starts
Jan 22 13:39:24 compute-1 ceph-mon[81715]: 11.1a scrub ok
Jan 22 13:39:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 22 13:39:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 22 13:39:24 compute-1 ceph-mon[81715]: osdmap e122: 3 total, 3 up, 3 in
Jan 22 13:39:25 compute-1 sudo[89619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzjkkdtddmkmwgakjyiuuabwzyvdrukm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089164.880194-685-84574165093094/AnsiballZ_getent.py'
Jan 22 13:39:25 compute-1 sudo[89619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:25 compute-1 python3.9[89621]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 22 13:39:25 compute-1 sudo[89619]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:25.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Jan 22 13:39:26 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 123 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=92/93 n=4 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=123) [0]/[1] r=0 lpr=123 pi=[92,123)/1 crt=61'690 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:26 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 123 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=92/93 n=4 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=123) [0]/[1] r=0 lpr=123 pi=[92,123)/1 crt=61'690 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 13:39:26 compute-1 sudo[89772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaifauaconojcetlphyrzyqueglbkifv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089166.0516434-715-56296481216457/AnsiballZ_getent.py'
Jan 22 13:39:26 compute-1 sudo[89772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:26.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:26 compute-1 python3.9[89774]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 22 13:39:26 compute-1 sudo[89772]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:27 compute-1 ceph-mon[81715]: pgmap v312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:27 compute-1 ceph-mon[81715]: 8.d scrub starts
Jan 22 13:39:27 compute-1 ceph-mon[81715]: 8.d scrub ok
Jan 22 13:39:27 compute-1 ceph-mon[81715]: 10.13 scrub starts
Jan 22 13:39:27 compute-1 ceph-mon[81715]: 10.13 scrub ok
Jan 22 13:39:27 compute-1 ceph-mon[81715]: osdmap e123: 3 total, 3 up, 3 in
Jan 22 13:39:27 compute-1 sudo[89925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjrzfgectcaboddwfqhqmopueqfysrvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089166.8601308-739-143076587117992/AnsiballZ_group.py'
Jan 22 13:39:27 compute-1 sudo[89925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Jan 22 13:39:27 compute-1 python3.9[89927]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 13:39:27 compute-1 sudo[89925]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:27 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 22 13:39:27 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 22 13:39:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Jan 22 13:39:27 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 125 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=123/125 n=4 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=123) [0]/[1] async=[0] r=0 lpr=123 pi=[92,123)/1 crt=61'690 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:39:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:27.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:28 compute-1 sudo[90077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyfmpdvtgeymdbgaceewblzfglqnlorp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089167.8667533-766-273189906806274/AnsiballZ_file.py'
Jan 22 13:39:28 compute-1 sudo[90077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:28 compute-1 python3.9[90079]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 22 13:39:28 compute-1 sudo[90077]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:28.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:28 compute-1 ceph-mon[81715]: 11.e scrub starts
Jan 22 13:39:28 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:28 compute-1 ceph-mon[81715]: 11.e scrub ok
Jan 22 13:39:28 compute-1 ceph-mon[81715]: 10.15 scrub starts
Jan 22 13:39:28 compute-1 ceph-mon[81715]: 10.15 scrub ok
Jan 22 13:39:28 compute-1 ceph-mon[81715]: pgmap v315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 22 13:39:28 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:28 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 22 13:39:28 compute-1 ceph-mon[81715]: osdmap e124: 3 total, 3 up, 3 in
Jan 22 13:39:28 compute-1 ceph-mon[81715]: 11.14 scrub starts
Jan 22 13:39:28 compute-1 ceph-mon[81715]: 11.14 scrub ok
Jan 22 13:39:28 compute-1 ceph-mon[81715]: osdmap e125: 3 total, 3 up, 3 in
Jan 22 13:39:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Jan 22 13:39:29 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 126 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=123/125 n=4 ec=59/49 lis/c=123/92 les/c/f=125/93/0 sis=126 pruub=14.590867043s) [0] async=[0] r=-1 lpr=126 pi=[92,126)/1 crt=61'690 mlcod 61'690 active pruub 308.968566895s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:29 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 126 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=123/125 n=4 ec=59/49 lis/c=123/92 les/c/f=125/93/0 sis=126 pruub=14.590797424s) [0] r=-1 lpr=126 pi=[92,126)/1 crt=61'690 mlcod 0'0 unknown NOTIFY pruub 308.968566895s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:29 compute-1 sudo[90229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itgqwmoshpmexsccelquhhwkftrhzjfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089168.9206831-799-280975342978610/AnsiballZ_dnf.py'
Jan 22 13:39:29 compute-1 sudo[90229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:29 compute-1 python3.9[90231]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:39:29 compute-1 ceph-mon[81715]: pgmap v317: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 22 B/s, 1 objects/s recovering
Jan 22 13:39:29 compute-1 ceph-mon[81715]: 8.a deep-scrub starts
Jan 22 13:39:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:29 compute-1 ceph-mon[81715]: 8.a deep-scrub ok
Jan 22 13:39:29 compute-1 ceph-mon[81715]: 10.5 scrub starts
Jan 22 13:39:29 compute-1 ceph-mon[81715]: 10.5 scrub ok
Jan 22 13:39:29 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:29 compute-1 ceph-mon[81715]: 8.3 scrub starts
Jan 22 13:39:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:29 compute-1 ceph-mon[81715]: 8.3 scrub ok
Jan 22 13:39:29 compute-1 ceph-mon[81715]: osdmap e126: 3 total, 3 up, 3 in
Jan 22 13:39:29 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 22 13:39:29 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 22 13:39:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:39:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:29.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:39:30 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Jan 22 13:39:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:30.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:30 compute-1 ceph-mon[81715]: 10.18 scrub starts
Jan 22 13:39:30 compute-1 ceph-mon[81715]: 10.18 scrub ok
Jan 22 13:39:30 compute-1 ceph-mon[81715]: pgmap v320: 305 pgs: 1 active+remapped, 1 peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 49 B/s, 2 objects/s recovering
Jan 22 13:39:30 compute-1 ceph-mon[81715]: 11.1b scrub starts
Jan 22 13:39:30 compute-1 ceph-mon[81715]: 11.1b scrub ok
Jan 22 13:39:30 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:30 compute-1 ceph-mon[81715]: osdmap e127: 3 total, 3 up, 3 in
Jan 22 13:39:30 compute-1 sudo[90229]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Jan 22 13:39:31 compute-1 sudo[90382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itrnfhuggpqqkhkbruaighrmqrcjfmzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089171.2823322-823-246580322785748/AnsiballZ_file.py'
Jan 22 13:39:31 compute-1 sudo[90382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:31 compute-1 python3.9[90384]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:39:31 compute-1 sudo[90382]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:31.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:32 compute-1 ceph-mon[81715]: osdmap e128: 3 total, 3 up, 3 in
Jan 22 13:39:32 compute-1 sudo[90534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyazyecafpnaprtxmyenaxcjhynnxazs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089172.0354333-847-4076598009636/AnsiballZ_stat.py'
Jan 22 13:39:32 compute-1 sudo[90534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:32.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:32 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.18 deep-scrub starts
Jan 22 13:39:32 compute-1 python3.9[90536]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:39:32 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.18 deep-scrub ok
Jan 22 13:39:32 compute-1 sudo[90534]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:32 compute-1 sudo[90612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqptuyremlbubjiwhgwbqapxqfhqqbek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089172.0354333-847-4076598009636/AnsiballZ_file.py'
Jan 22 13:39:32 compute-1 sudo[90612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:32 compute-1 python3.9[90614]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:39:33 compute-1 sudo[90612]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:33 compute-1 ceph-mon[81715]: pgmap v323: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 1 objects/s recovering
Jan 22 13:39:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:33 compute-1 ceph-mon[81715]: 11.16 scrub starts
Jan 22 13:39:33 compute-1 ceph-mon[81715]: 11.16 scrub ok
Jan 22 13:39:33 compute-1 ceph-mon[81715]: 8.18 deep-scrub starts
Jan 22 13:39:33 compute-1 ceph-mon[81715]: 8.18 deep-scrub ok
Jan 22 13:39:33 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 22 13:39:33 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 22 13:39:33 compute-1 sudo[90764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbcfnihqdupzscqompnwaoqbgpmoauqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089173.404477-886-110165093400577/AnsiballZ_stat.py'
Jan 22 13:39:33 compute-1 sudo[90764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:33.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:33 compute-1 python3.9[90766]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:39:33 compute-1 sudo[90764]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:34 compute-1 sudo[90842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auolsmedzhqnxwunrvazspwtovumlagu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089173.404477-886-110165093400577/AnsiballZ_file.py'
Jan 22 13:39:34 compute-1 sudo[90842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:34 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:34 compute-1 ceph-mon[81715]: 8.17 scrub starts
Jan 22 13:39:34 compute-1 ceph-mon[81715]: 8.17 scrub ok
Jan 22 13:39:34 compute-1 python3.9[90844]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:39:34 compute-1 sudo[90842]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:39:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:34.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:39:35 compute-1 sudo[90994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smzpynlqjqyvdwsyimuofruahwrwlvav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089174.899292-931-6675074836600/AnsiballZ_dnf.py'
Jan 22 13:39:35 compute-1 sudo[90994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:35 compute-1 ceph-mon[81715]: pgmap v324: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:35 compute-1 ceph-mon[81715]: 8.15 scrub starts
Jan 22 13:39:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:35 compute-1 ceph-mon[81715]: 8.15 scrub ok
Jan 22 13:39:35 compute-1 ceph-mon[81715]: 10.1b scrub starts
Jan 22 13:39:35 compute-1 ceph-mon[81715]: 10.1b scrub ok
Jan 22 13:39:35 compute-1 python3.9[90996]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:39:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:35.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.383701) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176383839, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7331, "num_deletes": 255, "total_data_size": 14116716, "memory_usage": 14338576, "flush_reason": "Manual Compaction"}
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Jan 22 13:39:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:36.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176449343, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 8798928, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 257, "largest_seqno": 7336, "table_properties": {"data_size": 8768075, "index_size": 20178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 92032, "raw_average_key_size": 24, "raw_value_size": 8693720, "raw_average_value_size": 2268, "num_data_blocks": 884, "num_entries": 3832, "num_filter_entries": 3832, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 1769088931, "file_creation_time": 1769089176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 65713 microseconds, and 18588 cpu microseconds.
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.449428) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 8798928 bytes OK
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.449457) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.462153) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.462210) EVENT_LOG_v1 {"time_micros": 1769089176462198, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.462239) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 14075848, prev total WAL file size 14075848, number of live WAL files 2.
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.465959) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(8592KB) 8(1648B)]
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176466106, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 8800576, "oldest_snapshot_seqno": -1}
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3581 keys, 8795436 bytes, temperature: kUnknown
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176528899, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 8795436, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8765249, "index_size": 20157, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8965, "raw_key_size": 87854, "raw_average_key_size": 24, "raw_value_size": 8694000, "raw_average_value_size": 2427, "num_data_blocks": 884, "num_entries": 3581, "num_filter_entries": 3581, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.529210) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 8795436 bytes
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.530365) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.0 rd, 139.9 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(8.4, 0.0 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3837, records dropped: 256 output_compression: NoCompression
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.530391) EVENT_LOG_v1 {"time_micros": 1769089176530379, "job": 4, "event": "compaction_finished", "compaction_time_micros": 62877, "compaction_time_cpu_micros": 19059, "output_level": 6, "num_output_files": 1, "total_output_size": 8795436, "num_input_records": 3837, "num_output_records": 3581, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176531913, "job": 4, "event": "table_file_deletion", "file_number": 14}
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176531965, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 22 13:39:36 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.465789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:39:36 compute-1 sudo[90994]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:37 compute-1 ceph-mon[81715]: pgmap v325: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:37 compute-1 ceph-mon[81715]: 10.2 scrub starts
Jan 22 13:39:37 compute-1 ceph-mon[81715]: 10.2 scrub ok
Jan 22 13:39:37 compute-1 python3.9[91148]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:39:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:37.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:38.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Jan 22 13:39:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:38 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 22 13:39:38 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:38 compute-1 ceph-mon[81715]: 11.17 scrub starts
Jan 22 13:39:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:38 compute-1 ceph-mon[81715]: 11.17 scrub ok
Jan 22 13:39:38 compute-1 python3.9[91300]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 22 13:39:38 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 22 13:39:38 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 22 13:39:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:39 compute-1 python3.9[91450]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:39:39 compute-1 ceph-mon[81715]: pgmap v326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 22 13:39:39 compute-1 ceph-mon[81715]: osdmap e129: 3 total, 3 up, 3 in
Jan 22 13:39:39 compute-1 ceph-mon[81715]: 8.4 scrub starts
Jan 22 13:39:39 compute-1 ceph-mon[81715]: 8.4 scrub ok
Jan 22 13:39:39 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:39.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:40.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:40 compute-1 ceph-mon[81715]: pgmap v328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 22 13:39:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:40 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Jan 22 13:39:40 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 130 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=98/99 n=5 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=130 pruub=8.937850952s) [2] r=-1 lpr=130 pi=[98,130)/1 crt=62'695 mlcod 0'0 active pruub 314.922821045s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:40 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 130 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=98/99 n=5 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=130 pruub=8.937788010s) [2] r=-1 lpr=130 pi=[98,130)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 314.922821045s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:40 compute-1 sudo[91600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbogktfdvaswrkrmrpiuhjdtbglomumo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089180.0529966-1054-153792847854037/AnsiballZ_systemd.py'
Jan 22 13:39:40 compute-1 sudo[91600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:41 compute-1 python3.9[91602]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:39:41 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 22 13:39:41 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Jan 22 13:39:41 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 22 13:39:41 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 13:39:41 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 13:39:41 compute-1 sudo[91600]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:41.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:42 compute-1 ceph-mon[81715]: 10.19 scrub starts
Jan 22 13:39:42 compute-1 ceph-mon[81715]: 10.19 scrub ok
Jan 22 13:39:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 22 13:39:42 compute-1 ceph-mon[81715]: osdmap e130: 3 total, 3 up, 3 in
Jan 22 13:39:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Jan 22 13:39:42 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 131 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=98/99 n=5 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=131) [2]/[1] r=0 lpr=131 pi=[98,131)/1 crt=62'695 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:42 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 131 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=98/99 n=5 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=131) [2]/[1] r=0 lpr=131 pi=[98,131)/1 crt=62'695 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 13:39:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:42.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:42 compute-1 python3.9[91765]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 22 13:39:43 compute-1 ceph-mon[81715]: 10.8 scrub starts
Jan 22 13:39:43 compute-1 ceph-mon[81715]: 10.8 scrub ok
Jan 22 13:39:43 compute-1 ceph-mon[81715]: pgmap v330: 305 pgs: 1 unknown, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:43 compute-1 ceph-mon[81715]: osdmap e131: 3 total, 3 up, 3 in
Jan 22 13:39:43 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Jan 22 13:39:43 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 132 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=131/132 n=5 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=131) [2]/[1] async=[2] r=0 lpr=131 pi=[98,131)/1 crt=62'695 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:39:43 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 22 13:39:43 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 22 13:39:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:43.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:44 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:44 compute-1 ceph-mon[81715]: 8.5 scrub starts
Jan 22 13:39:44 compute-1 ceph-mon[81715]: 8.5 scrub ok
Jan 22 13:39:44 compute-1 ceph-mon[81715]: osdmap e132: 3 total, 3 up, 3 in
Jan 22 13:39:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 22 13:39:44 compute-1 ceph-mon[81715]: 9.e scrub starts
Jan 22 13:39:44 compute-1 ceph-mon[81715]: 9.e scrub ok
Jan 22 13:39:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Jan 22 13:39:44 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 133 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=77/78 n=5 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=133 pruub=15.142436028s) [0] r=-1 lpr=133 pi=[77,133)/1 crt=62'697 mlcod 0'0 active pruub 324.771209717s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:44 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 133 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=131/132 n=5 ec=59/49 lis/c=131/98 les/c/f=132/99/0 sis=133 pruub=14.980648041s) [2] async=[2] r=-1 lpr=133 pi=[98,133)/1 crt=62'695 mlcod 62'695 active pruub 324.609436035s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:44 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 133 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=77/78 n=5 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=133 pruub=15.142356873s) [0] r=-1 lpr=133 pi=[77,133)/1 crt=62'697 mlcod 0'0 unknown NOTIFY pruub 324.771209717s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:44 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 133 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=131/132 n=5 ec=59/49 lis/c=131/98 les/c/f=132/99/0 sis=133 pruub=14.980477333s) [2] r=-1 lpr=133 pi=[98,133)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 324.609436035s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:44.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:45 compute-1 ceph-mon[81715]: pgmap v333: 305 pgs: 1 active+remapped, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 22 13:39:45 compute-1 ceph-mon[81715]: osdmap e133: 3 total, 3 up, 3 in
Jan 22 13:39:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Jan 22 13:39:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 134 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=77/78 n=5 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=134) [0]/[1] r=0 lpr=134 pi=[77,134)/1 crt=62'697 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:45 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 134 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=77/78 n=5 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=134) [0]/[1] r=0 lpr=134 pi=[77,134)/1 crt=62'697 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 13:39:45 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 22 13:39:45 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 22 13:39:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:45.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:46.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:46 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:46 compute-1 ceph-mon[81715]: osdmap e134: 3 total, 3 up, 3 in
Jan 22 13:39:46 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:39:46 compute-1 ceph-mon[81715]: 9.6 scrub starts
Jan 22 13:39:46 compute-1 ceph-mon[81715]: 9.6 scrub ok
Jan 22 13:39:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Jan 22 13:39:46 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 135 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=99/100 n=5 ec=59/49 lis/c=99/99 les/c/f=100/100/0 sis=135 pruub=12.084430695s) [0] r=-1 lpr=135 pi=[99,135)/1 crt=62'695 mlcod 0'0 active pruub 323.938873291s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:46 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 135 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=99/100 n=5 ec=59/49 lis/c=99/99 les/c/f=100/100/0 sis=135 pruub=12.084036827s) [0] r=-1 lpr=135 pi=[99,135)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 323.938873291s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:46 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 135 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=134/135 n=5 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[77,134)/1 crt=62'697 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:39:46 compute-1 sudo[91915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uydahnwinhqzdwnsjiyugqsjfejnqozy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089186.335904-1225-147157965244290/AnsiballZ_systemd.py'
Jan 22 13:39:46 compute-1 sudo[91915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:47 compute-1 python3.9[91917]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:39:47 compute-1 sudo[91915]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:47 compute-1 sudo[92069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbduxxpclngvjuznnohrgxvzgzhoimik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089187.2589524-1225-156261029957087/AnsiballZ_systemd.py'
Jan 22 13:39:47 compute-1 sudo[92069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:47 compute-1 python3.9[92071]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:39:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:47.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:48.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:48 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 22 13:39:48 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 22 13:39:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:48 compute-1 sudo[92069]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:49 compute-1 sudo[92098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:49 compute-1 sudo[92098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:49 compute-1 sudo[92098]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:49 compute-1 sudo[92123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:39:49 compute-1 sudo[92123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:49 compute-1 sudo[92123]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:49 compute-1 sudo[92148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:49 compute-1 sudo[92148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:49 compute-1 sudo[92148]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:49 compute-1 sudo[92173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:39:49 compute-1 sudo[92173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:49 compute-1 ceph-mon[81715]: pgmap v336: 305 pgs: 1 active+remapped, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s; 164 B/s, 3 objects/s recovering
Jan 22 13:39:49 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:49 compute-1 ceph-mon[81715]: 8.f scrub starts
Jan 22 13:39:49 compute-1 ceph-mon[81715]: 8.f scrub ok
Jan 22 13:39:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:39:49 compute-1 ceph-mon[81715]: osdmap e135: 3 total, 3 up, 3 in
Jan 22 13:39:49 compute-1 ceph-mon[81715]: 8.c scrub starts
Jan 22 13:39:49 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:49 compute-1 ceph-mon[81715]: 8.c scrub ok
Jan 22 13:39:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Jan 22 13:39:49 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 136 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=134/135 n=5 ec=59/49 lis/c=134/77 les/c/f=135/78/0 sis=136 pruub=13.113392830s) [0] async=[0] r=-1 lpr=136 pi=[77,136)/1 crt=62'697 mlcod 62'697 active pruub 327.872009277s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:49 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 136 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=134/135 n=5 ec=59/49 lis/c=134/77 les/c/f=135/78/0 sis=136 pruub=13.113287926s) [0] r=-1 lpr=136 pi=[77,136)/1 crt=62'697 mlcod 0'0 unknown NOTIFY pruub 327.872009277s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:49 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 136 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=99/100 n=5 ec=59/49 lis/c=99/99 les/c/f=100/100/0 sis=136) [0]/[1] r=0 lpr=136 pi=[99,136)/1 crt=62'695 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:49 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 136 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=99/100 n=5 ec=59/49 lis/c=99/99 les/c/f=100/100/0 sis=136) [0]/[1] r=0 lpr=136 pi=[99,136)/1 crt=62'695 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 13:39:49 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.12 deep-scrub starts
Jan 22 13:39:49 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.12 deep-scrub ok
Jan 22 13:39:49 compute-1 podman[92268]: 2026-01-22 13:39:49.898072535 +0000 UTC m=+0.070932458 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:39:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:49.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:50 compute-1 podman[92268]: 2026-01-22 13:39:50.018422233 +0000 UTC m=+0.191282156 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:39:50 compute-1 sshd-session[84277]: Connection closed by 192.168.122.30 port 39468
Jan 22 13:39:50 compute-1 sshd-session[84274]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:39:50 compute-1 systemd-logind[787]: Session 34 logged out. Waiting for processes to exit.
Jan 22 13:39:50 compute-1 systemd[1]: session-34.scope: Deactivated successfully.
Jan 22 13:39:50 compute-1 systemd[1]: session-34.scope: Consumed 1min 7.327s CPU time.
Jan 22 13:39:50 compute-1 systemd-logind[787]: Removed session 34.
Jan 22 13:39:50 compute-1 sudo[92173]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:50.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:50 compute-1 ceph-mon[81715]: pgmap v338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 16 op/s; 155 B/s, 3 objects/s recovering
Jan 22 13:39:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:50 compute-1 ceph-mon[81715]: 9.1 scrub starts
Jan 22 13:39:50 compute-1 ceph-mon[81715]: 9.1 scrub ok
Jan 22 13:39:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:50 compute-1 ceph-mon[81715]: 11.13 scrub starts
Jan 22 13:39:50 compute-1 ceph-mon[81715]: 11.13 scrub ok
Jan 22 13:39:50 compute-1 ceph-mon[81715]: 9.19 scrub starts
Jan 22 13:39:50 compute-1 ceph-mon[81715]: 9.19 scrub ok
Jan 22 13:39:50 compute-1 ceph-mon[81715]: osdmap e136: 3 total, 3 up, 3 in
Jan 22 13:39:50 compute-1 ceph-mon[81715]: 9.12 deep-scrub starts
Jan 22 13:39:50 compute-1 ceph-mon[81715]: 9.12 deep-scrub ok
Jan 22 13:39:50 compute-1 ceph-mon[81715]: 9.b scrub starts
Jan 22 13:39:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:50 compute-1 ceph-mon[81715]: 9.b scrub ok
Jan 22 13:39:50 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Jan 22 13:39:50 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 137 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=136/137 n=5 ec=59/49 lis/c=99/99 les/c/f=100/100/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[99,136)/1 crt=62'695 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:39:50 compute-1 sudo[92392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:50 compute-1 sudo[92392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:50 compute-1 sudo[92392]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:50 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 22 13:39:50 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 22 13:39:50 compute-1 sudo[92417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:39:50 compute-1 sudo[92417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:50 compute-1 sudo[92417]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:50 compute-1 sudo[92442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:50 compute-1 sudo[92442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:50 compute-1 sudo[92442]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:50 compute-1 sudo[92467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:39:50 compute-1 sudo[92467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:51 compute-1 sudo[92467]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:51 compute-1 ceph-mon[81715]: pgmap v340: 305 pgs: 1 active+remapped, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:51 compute-1 ceph-mon[81715]: osdmap e137: 3 total, 3 up, 3 in
Jan 22 13:39:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:51 compute-1 ceph-mon[81715]: 9.a scrub starts
Jan 22 13:39:51 compute-1 ceph-mon[81715]: 9.a scrub ok
Jan 22 13:39:51 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Jan 22 13:39:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 138 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=136/137 n=5 ec=59/49 lis/c=136/99 les/c/f=137/100/0 sis=138 pruub=14.980248451s) [0] async=[0] r=-1 lpr=138 pi=[99,138)/1 crt=62'695 mlcod 62'695 active pruub 331.834564209s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:51 compute-1 ceph-osd[79044]: osd.1 pg_epoch: 138 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=136/137 n=5 ec=59/49 lis/c=136/99 les/c/f=137/100/0 sis=138 pruub=14.979649544s) [0] r=-1 lpr=138 pi=[99,138)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 331.834564209s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:51 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 22 13:39:51 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 22 13:39:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:51.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:52.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:52 compute-1 ceph-mon[81715]: pgmap v342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:52 compute-1 ceph-mon[81715]: osdmap e138: 3 total, 3 up, 3 in
Jan 22 13:39:52 compute-1 ceph-mon[81715]: 9.d scrub starts
Jan 22 13:39:52 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:52 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:52 compute-1 ceph-mon[81715]: 9.d scrub ok
Jan 22 13:39:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:52 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:39:52 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:39:52 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:52 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:39:52 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:39:52 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:39:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Jan 22 13:39:53 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:53 compute-1 ceph-mon[81715]: osdmap e139: 3 total, 3 up, 3 in
Jan 22 13:39:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:39:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:53.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:54.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:54 compute-1 ceph-mon[81715]: pgmap v345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:54 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:55 compute-1 ceph-mon[81715]: 9.3 scrub starts
Jan 22 13:39:55 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:55 compute-1 ceph-mon[81715]: 9.3 scrub ok
Jan 22 13:39:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:55.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:55 compute-1 sshd-session[92524]: Accepted publickey for zuul from 192.168.122.30 port 58766 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:39:55 compute-1 systemd-logind[787]: New session 35 of user zuul.
Jan 22 13:39:55 compute-1 systemd[1]: Started Session 35 of User zuul.
Jan 22 13:39:56 compute-1 sshd-session[92524]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:39:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:39:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:56.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:39:56 compute-1 ceph-mon[81715]: 9.1a scrub starts
Jan 22 13:39:56 compute-1 ceph-mon[81715]: 9.1a scrub ok
Jan 22 13:39:56 compute-1 ceph-mon[81715]: pgmap v346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:56 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:57 compute-1 python3.9[92677]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:39:57 compute-1 ceph-mon[81715]: 9.1b scrub starts
Jan 22 13:39:57 compute-1 ceph-mon[81715]: 9.1b scrub ok
Jan 22 13:39:57 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:57.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:58 compute-1 sudo[92831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xswdeutnbsgbcudoourehdjyjgoaxpav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089197.8456352-69-106007942679143/AnsiballZ_getent.py'
Jan 22 13:39:58 compute-1 sudo[92831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:58.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:58 compute-1 python3.9[92833]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 22 13:39:58 compute-1 sudo[92831]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:58 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 22 13:39:58 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 22 13:39:58 compute-1 ceph-mon[81715]: pgmap v347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:58 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:58 compute-1 ceph-mon[81715]: 9.1e scrub starts
Jan 22 13:39:58 compute-1 ceph-mon[81715]: 9.1e scrub ok
Jan 22 13:39:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:39:59 compute-1 sudo[92984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsouvqxiwxculnhzjmutjyelwpegvzhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089199.0520217-105-271855960028451/AnsiballZ_setup.py'
Jan 22 13:39:59 compute-1 sudo[92984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:59 compute-1 sudo[92987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:59 compute-1 sudo[92987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:59 compute-1 sudo[92987]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:59 compute-1 sudo[93012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:39:59 compute-1 sudo[93012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:59 compute-1 sudo[93012]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:59 compute-1 python3.9[92986]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:39:59 compute-1 ceph-mon[81715]: 9.f scrub starts
Jan 22 13:39:59 compute-1 ceph-mon[81715]: 9.f scrub ok
Jan 22 13:39:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:59 compute-1 sudo[92984]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:39:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:59.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:00 compute-1 sudo[93118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itqqhkwoyttnonqcuahewesuhildybxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089199.0520217-105-271855960028451/AnsiballZ_dnf.py'
Jan 22 13:40:00 compute-1 sudo[93118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:00.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:00 compute-1 python3.9[93120]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 13:40:00 compute-1 ceph-mon[81715]: pgmap v348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:00 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:00 compute-1 ceph-mon[81715]: Health detail: HEALTH_WARN 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops
Jan 22 13:40:00 compute-1 ceph-mon[81715]: [WRN] SLOW_OPS: 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops
Jan 22 13:40:01 compute-1 anacron[8883]: Job `cron.weekly' started
Jan 22 13:40:01 compute-1 anacron[8883]: Job `cron.weekly' terminated
Jan 22 13:40:01 compute-1 ceph-mon[81715]: 9.1f scrub starts
Jan 22 13:40:01 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:01 compute-1 ceph-mon[81715]: 9.7 scrub starts
Jan 22 13:40:01 compute-1 ceph-mon[81715]: 9.1f scrub ok
Jan 22 13:40:01 compute-1 ceph-mon[81715]: 9.7 scrub ok
Jan 22 13:40:01 compute-1 sudo[93118]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:01.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:02.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:02 compute-1 sudo[93274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlsmjynfedyhrhaejrngpovrvikxndqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089202.297635-147-45728050668087/AnsiballZ_dnf.py'
Jan 22 13:40:02 compute-1 sudo[93274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:02 compute-1 ceph-mon[81715]: pgmap v349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:02 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:02 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:02 compute-1 python3.9[93276]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:40:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:03.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:04 compute-1 sudo[93274]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:04.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:04 compute-1 ceph-mon[81715]: pgmap v350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:04 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:04 compute-1 sudo[93427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkfidwqkxnsjalqfmmmiuhmnhqtolanp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089204.3238757-171-55947412813953/AnsiballZ_systemd.py'
Jan 22 13:40:04 compute-1 sudo[93427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:05 compute-1 python3.9[93429]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:40:05 compute-1 sudo[93427]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:05.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:06 compute-1 python3.9[93582]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:40:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:06.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:06 compute-1 sudo[93732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzhbruqzlplhnwlurpouqafrmkjwkzyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089206.457695-225-4238027465805/AnsiballZ_sefcontext.py'
Jan 22 13:40:06 compute-1 sudo[93732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:06 compute-1 ceph-mon[81715]: pgmap v351: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:06 compute-1 ceph-mon[81715]: 9.13 scrub starts
Jan 22 13:40:06 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:06 compute-1 ceph-mon[81715]: 9.13 scrub ok
Jan 22 13:40:07 compute-1 python3.9[93734]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 22 13:40:07 compute-1 sudo[93732]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:07 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:07 compute-1 ceph-mon[81715]: 9.17 scrub starts
Jan 22 13:40:07 compute-1 ceph-mon[81715]: 9.17 scrub ok
Jan 22 13:40:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:07.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:08 compute-1 python3.9[93884]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:40:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:08.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:08 compute-1 ceph-mon[81715]: pgmap v352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:08 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:09 compute-1 sudo[94040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyzecuertaotgcgvcbkmgagdestrqbpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089209.024686-279-99024093702185/AnsiballZ_dnf.py'
Jan 22 13:40:09 compute-1 sudo[94040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:09 compute-1 python3.9[94042]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:40:09 compute-1 ceph-mon[81715]: 9.5 scrub starts
Jan 22 13:40:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:09 compute-1 ceph-mon[81715]: 9.5 scrub ok
Jan 22 13:40:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:09.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:40:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:10.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:40:10 compute-1 ceph-mon[81715]: pgmap v353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:10 compute-1 ceph-mon[81715]: 9.18 scrub starts
Jan 22 13:40:10 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:10 compute-1 ceph-mon[81715]: 9.18 scrub ok
Jan 22 13:40:11 compute-1 sudo[94040]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:11 compute-1 sudo[94193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzhhjainicgfjdgshesidpovvfnybioe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089211.2717988-303-123258201381765/AnsiballZ_command.py'
Jan 22 13:40:11 compute-1 sudo[94193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:11 compute-1 python3.9[94195]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:40:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:11.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:12 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:12.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:12 compute-1 sudo[94193]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:13 compute-1 ceph-mon[81715]: pgmap v354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:13 compute-1 ceph-mon[81715]: 9.8 scrub starts
Jan 22 13:40:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:13 compute-1 ceph-mon[81715]: 9.8 scrub ok
Jan 22 13:40:13 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 22 13:40:13 compute-1 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 22 13:40:13 compute-1 sudo[94480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cavcvhuiwfbubxvfgmbrzffwxnormrsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089212.9609275-327-214394439941955/AnsiballZ_file.py'
Jan 22 13:40:13 compute-1 sudo[94480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:13 compute-1 python3.9[94482]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 22 13:40:13 compute-1 sudo[94480]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:13.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:14 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:14 compute-1 ceph-mon[81715]: 9.15 scrub starts
Jan 22 13:40:14 compute-1 ceph-mon[81715]: 9.15 scrub ok
Jan 22 13:40:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:14.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:14 compute-1 python3.9[94632]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:40:15 compute-1 sudo[94784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yskbwbdoeysimorqymxrzeawzndywnsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089214.8363013-375-248988971376063/AnsiballZ_dnf.py'
Jan 22 13:40:15 compute-1 sudo[94784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:15 compute-1 ceph-mon[81715]: pgmap v355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:15 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:15 compute-1 python3.9[94786]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:40:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:15.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:40:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:16.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:40:16 compute-1 sudo[94784]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:17 compute-1 sudo[94937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nayauupjtwzkcvboptqmgkbujfktcpgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089217.0760584-402-49085220690945/AnsiballZ_dnf.py'
Jan 22 13:40:17 compute-1 sudo[94937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:17 compute-1 ceph-mon[81715]: pgmap v356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:17 compute-1 ceph-mon[81715]: 9.9 scrub starts
Jan 22 13:40:17 compute-1 ceph-mon[81715]: 9.9 scrub ok
Jan 22 13:40:17 compute-1 python3.9[94939]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:40:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:17.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:18.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:18 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:18 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:18 compute-1 sudo[94937]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:19 compute-1 ceph-mon[81715]: pgmap v357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:19 compute-1 sudo[95090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hchmjvfxsaeysdoyhaovrinqbogboywi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089219.3742638-438-267822973929768/AnsiballZ_stat.py'
Jan 22 13:40:19 compute-1 sudo[95090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:19 compute-1 python3.9[95092]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:40:19 compute-1 sudo[95090]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:19.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:20.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:20 compute-1 sudo[95244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wznydxdpolfouqldudvsdnjuehwaokua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089220.0991921-462-200316879719362/AnsiballZ_slurp.py'
Jan 22 13:40:20 compute-1 sudo[95244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:20 compute-1 python3.9[95246]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 22 13:40:20 compute-1 sudo[95244]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:21 compute-1 ceph-mon[81715]: pgmap v358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:21 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:21.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:22 compute-1 sshd-session[92527]: Connection closed by 192.168.122.30 port 58766
Jan 22 13:40:22 compute-1 sshd-session[92524]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:40:22 compute-1 systemd[1]: session-35.scope: Deactivated successfully.
Jan 22 13:40:22 compute-1 systemd[1]: session-35.scope: Consumed 18.631s CPU time.
Jan 22 13:40:22 compute-1 systemd-logind[787]: Session 35 logged out. Waiting for processes to exit.
Jan 22 13:40:22 compute-1 systemd-logind[787]: Removed session 35.
Jan 22 13:40:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:40:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:22.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:40:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:23 compute-1 ceph-mon[81715]: pgmap v359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:23 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:23 compute-1 ceph-mon[81715]: 9.16 scrub starts
Jan 22 13:40:23 compute-1 ceph-mon[81715]: 9.16 scrub ok
Jan 22 13:40:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:23.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:24.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:24 compute-1 ceph-mon[81715]: pgmap v360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:24 compute-1 ceph-mon[81715]: 9.1d scrub starts
Jan 22 13:40:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:24 compute-1 ceph-mon[81715]: 9.1d scrub ok
Jan 22 13:40:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:25.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:26.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:26 compute-1 ceph-mon[81715]: pgmap v361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:27 compute-1 sshd-session[95271]: Accepted publickey for zuul from 192.168.122.30 port 55500 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:40:27 compute-1 systemd-logind[787]: New session 36 of user zuul.
Jan 22 13:40:27 compute-1 systemd[1]: Started Session 36 of User zuul.
Jan 22 13:40:27 compute-1 sshd-session[95271]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:40:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:27.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:28.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:28 compute-1 python3.9[95424]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:40:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:28 compute-1 ceph-mon[81715]: pgmap v362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:28 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:28 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:29 compute-1 python3.9[95578]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:40:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:29.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:30 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:30.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:30 compute-1 python3.9[95771]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:40:31 compute-1 ceph-mon[81715]: pgmap v363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:31 compute-1 sshd-session[95274]: Connection closed by 192.168.122.30 port 55500
Jan 22 13:40:31 compute-1 sshd-session[95271]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:40:31 compute-1 systemd[1]: session-36.scope: Deactivated successfully.
Jan 22 13:40:31 compute-1 systemd[1]: session-36.scope: Consumed 2.427s CPU time.
Jan 22 13:40:31 compute-1 systemd-logind[787]: Session 36 logged out. Waiting for processes to exit.
Jan 22 13:40:31 compute-1 systemd-logind[787]: Removed session 36.
Jan 22 13:40:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:32.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:32.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:33 compute-1 ceph-mon[81715]: pgmap v364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:34.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:34.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:34 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:35 compute-1 ceph-mon[81715]: pgmap v365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:36.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:36.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:36 compute-1 ceph-mon[81715]: pgmap v366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:37 compute-1 sshd-session[95797]: Accepted publickey for zuul from 192.168.122.30 port 46244 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:40:37 compute-1 systemd-logind[787]: New session 37 of user zuul.
Jan 22 13:40:37 compute-1 systemd[1]: Started Session 37 of User zuul.
Jan 22 13:40:37 compute-1 sshd-session[95797]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:40:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:37 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:38.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:38 compute-1 python3.9[95950]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:40:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:38.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:39 compute-1 ceph-mon[81715]: pgmap v367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:39 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:39 compute-1 python3.9[96104]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:40:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:40.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:40 compute-1 sudo[96258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtqovwgmufcmniixusmxgfwrxputpmts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089240.0938692-81-203559553464756/AnsiballZ_setup.py'
Jan 22 13:40:40 compute-1 sudo[96258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:40.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:40 compute-1 python3.9[96260]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:40:40 compute-1 sudo[96258]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:41 compute-1 ceph-mon[81715]: pgmap v368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:41 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:41 compute-1 sudo[96342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jthcfjhggrpfwqfzsleevspwvvxzkryd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089240.0938692-81-203559553464756/AnsiballZ_dnf.py'
Jan 22 13:40:41 compute-1 sudo[96342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:41 compute-1 python3.9[96344]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:40:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:42.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:42.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:42 compute-1 sudo[96342]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:43 compute-1 sudo[96495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlpbrcmnntlmnmrudilotdfcnljaibns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089243.0790102-117-26148950365472/AnsiballZ_setup.py'
Jan 22 13:40:43 compute-1 sudo[96495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:43 compute-1 ceph-mon[81715]: pgmap v369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:43 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:43 compute-1 python3.9[96497]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:40:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:43 compute-1 sudo[96495]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:44.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:40:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:44.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:40:44 compute-1 sudo[96690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbdsmdcniqfrmaquvreauzpmjuuqpkvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089244.4461582-150-214514909053646/AnsiballZ_file.py'
Jan 22 13:40:44 compute-1 sudo[96690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:44 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:45 compute-1 python3.9[96692]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:40:45 compute-1 sudo[96690]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:45 compute-1 sudo[96842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anewpaccvuufterwhlwsfspbswyknuon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089245.3341134-174-7778624481663/AnsiballZ_command.py'
Jan 22 13:40:45 compute-1 sudo[96842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:45 compute-1 ceph-mon[81715]: pgmap v370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:45 compute-1 python3.9[96844]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:40:46 compute-1 sudo[96842]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:46.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:46.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:46 compute-1 sudo[97007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnctcnwhweigkcfxoznshojudimqipzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089246.3958635-198-9830598477207/AnsiballZ_stat.py'
Jan 22 13:40:46 compute-1 sudo[97007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:46 compute-1 ceph-mon[81715]: pgmap v371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:46 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:47 compute-1 python3.9[97009]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:40:47 compute-1 sudo[97007]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:47 compute-1 sudo[97085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdyvdusjhlvwoszzfhatlynlsdiyepua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089246.3958635-198-9830598477207/AnsiballZ_file.py'
Jan 22 13:40:47 compute-1 sudo[97085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:47 compute-1 python3.9[97087]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:40:47 compute-1 sudo[97085]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:47 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:48.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:48 compute-1 sudo[97237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiorjqwpfohrtfrtyjlwyairefcnqjmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089247.8933122-234-218943637940440/AnsiballZ_stat.py'
Jan 22 13:40:48 compute-1 sudo[97237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:48 compute-1 python3.9[97239]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:40:48 compute-1 sudo[97237]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:48.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:48 compute-1 sudo[97315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaadohxwrfxanmzvlisnpigvlexodbth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089247.8933122-234-218943637940440/AnsiballZ_file.py'
Jan 22 13:40:48 compute-1 sudo[97315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:48 compute-1 python3.9[97317]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:40:48 compute-1 sudo[97315]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:48 compute-1 ceph-mon[81715]: pgmap v372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:48 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:48 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:49 compute-1 sudo[97467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjhwzenopjkkgxokmomsbxvvhpxofuff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089249.320302-273-195342292985113/AnsiballZ_ini_file.py'
Jan 22 13:40:49 compute-1 sudo[97467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:49 compute-1 python3.9[97469]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:40:49 compute-1 sudo[97467]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:49 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:50.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:50 compute-1 sudo[97619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmrihtrndqvooxzpunuqkutafwzajjny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089250.0657625-273-218413105041933/AnsiballZ_ini_file.py'
Jan 22 13:40:50 compute-1 sudo[97619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:50 compute-1 python3.9[97621]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:40:50 compute-1 sudo[97619]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:50.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:50 compute-1 sudo[97771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khfasxqkchgptvrfykqautejvtoqbtrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089250.706444-273-195428513047763/AnsiballZ_ini_file.py'
Jan 22 13:40:50 compute-1 sudo[97771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:51 compute-1 ceph-mon[81715]: pgmap v373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:51 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:51 compute-1 python3.9[97773]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:40:51 compute-1 sudo[97771]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:51 compute-1 sudo[97923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgikrraqjhymmwzqsiihunyayambcjrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089251.3487206-273-110242955812953/AnsiballZ_ini_file.py'
Jan 22 13:40:51 compute-1 sudo[97923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:51 compute-1 python3.9[97925]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:40:51 compute-1 sudo[97923]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:52.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:52.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:52 compute-1 sudo[98075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmnkugcqczdcmufomqtkmapzyhobasye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089252.5287385-366-193073408736276/AnsiballZ_dnf.py'
Jan 22 13:40:52 compute-1 sudo[98075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:53 compute-1 ceph-mon[81715]: pgmap v374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:53 compute-1 python3.9[98077]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:40:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:54.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:54 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:54 compute-1 sudo[98075]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:54.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:55 compute-1 ceph-mon[81715]: pgmap v375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:55 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:55 compute-1 sudo[98228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiouyzpnqhdgfmnbeevimzggunpyzjhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089255.034691-399-59413932215215/AnsiballZ_setup.py'
Jan 22 13:40:55 compute-1 sudo[98228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:55 compute-1 python3.9[98230]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:40:55 compute-1 sudo[98228]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:56.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:56 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:56 compute-1 sudo[98382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrsxwfthykbebraalehfdeqqtvucyqct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089255.995074-423-279736036171607/AnsiballZ_stat.py'
Jan 22 13:40:56 compute-1 sudo[98382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:56 compute-1 python3.9[98384]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:40:56 compute-1 sudo[98382]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:56.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:57 compute-1 sudo[98534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hikpaibfdonlbiqflthmsvaymovfedzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089256.8327503-450-179415954588869/AnsiballZ_stat.py'
Jan 22 13:40:57 compute-1 sudo[98534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:57 compute-1 ceph-mon[81715]: pgmap v376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:57 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:57 compute-1 python3.9[98536]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:40:57 compute-1 sudo[98534]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:58 compute-1 sudo[98686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orcikboshhkhahimbwtbweelzvmseoux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089257.7366517-480-123236980228253/AnsiballZ_command.py'
Jan 22 13:40:58 compute-1 sudo[98686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:58.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:58 compute-1 python3.9[98688]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:40:58 compute-1 sudo[98686]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:58 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:40:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:58.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:59 compute-1 sudo[98839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipoevspozjvadfqvhqpvqlvxqbstfmtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089258.6029294-510-83742801838532/AnsiballZ_service_facts.py'
Jan 22 13:40:59 compute-1 sudo[98839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:59 compute-1 ceph-mon[81715]: pgmap v377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:59 compute-1 python3.9[98841]: ansible-service_facts Invoked
Jan 22 13:40:59 compute-1 network[98858]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:40:59 compute-1 network[98859]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:40:59 compute-1 network[98860]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:41:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:00.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:00 compute-1 sudo[98866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:00 compute-1 sudo[98866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:00 compute-1 sudo[98866]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:00 compute-1 sudo[98892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:41:00 compute-1 sudo[98892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:00 compute-1 sudo[98892]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:00 compute-1 sudo[98920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:00 compute-1 sudo[98920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:00 compute-1 sudo[98920]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:00 compute-1 sudo[98948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:41:00 compute-1 sudo[98948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:00 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:00.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:00 compute-1 podman[99068]: 2026-01-22 13:41:00.848129661 +0000 UTC m=+0.063039800 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:41:00 compute-1 podman[99068]: 2026-01-22 13:41:00.945974549 +0000 UTC m=+0.160884698 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 13:41:01 compute-1 sudo[98948]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:01 compute-1 ceph-mon[81715]: pgmap v378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:01 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:01 compute-1 sudo[99221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:01 compute-1 sudo[99221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:01 compute-1 sudo[99221]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:01 compute-1 sudo[99249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:41:01 compute-1 sudo[99249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:01 compute-1 sudo[99249]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:01 compute-1 sudo[99277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:01 compute-1 sudo[99277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:01 compute-1 sudo[99277]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:01 compute-1 sudo[99306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:41:01 compute-1 sudo[99306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:02.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:02 compute-1 sudo[99306]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:02 compute-1 sudo[98839]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:02 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:02.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:03 compute-1 ceph-mon[81715]: pgmap v379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:03 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:41:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:41:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:41:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:41:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:41:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:03 compute-1 sudo[99561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otrukbprknhreukzexaupgimfrimvowz ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769089263.6029267-555-145276032972521/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769089263.6029267-555-145276032972521/args'
Jan 22 13:41:03 compute-1 sudo[99561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:04 compute-1 sudo[99561]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:04.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:04.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:04 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:04 compute-1 sudo[99728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruudjhrrvnlkmrsygktedhxgfwvqwxni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089264.5296788-589-11664956426289/AnsiballZ_dnf.py'
Jan 22 13:41:04 compute-1 sudo[99728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:05 compute-1 python3.9[99730]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:41:05 compute-1 ceph-mon[81715]: pgmap v380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:06.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:06 compute-1 sudo[99728]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:06.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:06 compute-1 ceph-mon[81715]: pgmap v381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:06 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:07 compute-1 sudo[99881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpozslgoiqwkqqzzmuxatcbgdjfrccii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089267.0521314-628-261642277735641/AnsiballZ_package_facts.py'
Jan 22 13:41:07 compute-1 sudo[99881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:07 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:08.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:08 compute-1 python3.9[99883]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 22 13:41:08 compute-1 sudo[99881]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:08.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:08 compute-1 ceph-mon[81715]: pgmap v382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:08 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:09 compute-1 sudo[100033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kovsjezsrpdtdvexahxadveylhuupxjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089268.9872901-657-175501791820816/AnsiballZ_stat.py'
Jan 22 13:41:09 compute-1 sudo[100033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:09 compute-1 python3.9[100035]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:09 compute-1 sudo[100033]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:09 compute-1 sudo[100111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovofznnwafzjplspeendwhvopxdaagvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089268.9872901-657-175501791820816/AnsiballZ_file.py'
Jan 22 13:41:09 compute-1 sudo[100111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:09 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:09 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:09 compute-1 sudo[100114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:09 compute-1 sudo[100114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:09 compute-1 sudo[100114]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:09 compute-1 python3.9[100113]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:09 compute-1 sudo[100111]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:09 compute-1 sudo[100139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:41:10 compute-1 sudo[100139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:10 compute-1 sudo[100139]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:10.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:10.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:10 compute-1 sudo[100313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vricydjrnerxhrisawbpjxudrlgoezvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089270.4099762-694-224104414228884/AnsiballZ_stat.py'
Jan 22 13:41:10 compute-1 sudo[100313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:10 compute-1 python3.9[100315]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:10 compute-1 ceph-mon[81715]: pgmap v383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:10 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:10 compute-1 sudo[100313]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:11 compute-1 sudo[100391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwsshmnbcjwsiyditalrmfkdhvavxhfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089270.4099762-694-224104414228884/AnsiballZ_file.py'
Jan 22 13:41:11 compute-1 sudo[100391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:11 compute-1 python3.9[100393]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:11 compute-1 sudo[100391]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:11 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:12.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:12.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:12 compute-1 ceph-mon[81715]: pgmap v384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:12 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:13 compute-1 sudo[100543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccmxkxjrvpkuxdusghzpvbruygcauoho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089272.6534932-747-84654081052490/AnsiballZ_lineinfile.py'
Jan 22 13:41:13 compute-1 sudo[100543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:13 compute-1 python3.9[100545]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:13 compute-1 sudo[100543]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:14.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:14 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:14.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:14 compute-1 sudo[100695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjmowyrconhqulmysyfwdlzwzmkvctht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089274.5716398-793-185931065211166/AnsiballZ_setup.py'
Jan 22 13:41:14 compute-1 sudo[100695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:15 compute-1 ceph-mon[81715]: pgmap v385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:15 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:15 compute-1 python3.9[100697]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:41:15 compute-1 sudo[100695]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:16 compute-1 sudo[100780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujsxlytengpbkxydujrlvobvtcpgxmdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089274.5716398-793-185931065211166/AnsiballZ_systemd.py'
Jan 22 13:41:16 compute-1 sudo[100780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:16.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:16 compute-1 python3.9[100782]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:41:16 compute-1 sudo[100780]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:16.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:17 compute-1 sshd-session[95800]: Connection closed by 192.168.122.30 port 46244
Jan 22 13:41:17 compute-1 sshd-session[95797]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:41:17 compute-1 systemd[1]: session-37.scope: Deactivated successfully.
Jan 22 13:41:17 compute-1 systemd[1]: session-37.scope: Consumed 23.526s CPU time.
Jan 22 13:41:17 compute-1 systemd-logind[787]: Session 37 logged out. Waiting for processes to exit.
Jan 22 13:41:17 compute-1 systemd-logind[787]: Removed session 37.
Jan 22 13:41:17 compute-1 ceph-mon[81715]: pgmap v386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:18.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:18.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:18 compute-1 ceph-mon[81715]: pgmap v387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:18 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:18 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:20.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:20.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:20 compute-1 ceph-mon[81715]: pgmap v388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:22.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:22.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:22 compute-1 sshd-session[100809]: Accepted publickey for zuul from 192.168.122.30 port 59532 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:41:22 compute-1 systemd-logind[787]: New session 38 of user zuul.
Jan 22 13:41:22 compute-1 systemd[1]: Started Session 38 of User zuul.
Jan 22 13:41:22 compute-1 sshd-session[100809]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:41:23 compute-1 sudo[100962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hchurwmnrqmvvbxxqjtssiujecwobipe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089283.034271-27-201448608270909/AnsiballZ_file.py'
Jan 22 13:41:23 compute-1 sudo[100962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:23 compute-1 python3.9[100964]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:23 compute-1 sudo[100962]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:23 compute-1 ceph-mon[81715]: pgmap v389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:23 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:24.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:24 compute-1 sudo[101114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmdhhdsibeacmjtijqzoakisyjvhggnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089283.8811638-63-3953338610129/AnsiballZ_stat.py'
Jan 22 13:41:24 compute-1 sudo[101114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:24 compute-1 python3.9[101116]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:24 compute-1 sudo[101114]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:24.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:24 compute-1 ceph-mon[81715]: pgmap v390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:24 compute-1 sudo[101192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rolvjlsjwpfoczriqdzblkrgcwpcntha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089283.8811638-63-3953338610129/AnsiballZ_file.py'
Jan 22 13:41:24 compute-1 sudo[101192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:24 compute-1 python3.9[101194]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:25 compute-1 sudo[101192]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:25 compute-1 sshd-session[100812]: Connection closed by 192.168.122.30 port 59532
Jan 22 13:41:25 compute-1 sshd-session[100809]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:41:25 compute-1 systemd[1]: session-38.scope: Deactivated successfully.
Jan 22 13:41:25 compute-1 systemd[1]: session-38.scope: Consumed 1.539s CPU time.
Jan 22 13:41:25 compute-1 systemd-logind[787]: Session 38 logged out. Waiting for processes to exit.
Jan 22 13:41:25 compute-1 systemd-logind[787]: Removed session 38.
Jan 22 13:41:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:26.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:26.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:26 compute-1 ceph-mon[81715]: pgmap v391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:28.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:28 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:28.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:29 compute-1 ceph-mon[81715]: pgmap v392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:29 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:30.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:30 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:30 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:30.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:31 compute-1 sshd-session[101220]: Accepted publickey for zuul from 192.168.122.30 port 40192 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:41:31 compute-1 systemd-logind[787]: New session 39 of user zuul.
Jan 22 13:41:31 compute-1 systemd[1]: Started Session 39 of User zuul.
Jan 22 13:41:31 compute-1 sshd-session[101220]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:41:31 compute-1 ceph-mon[81715]: pgmap v393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:32.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:32 compute-1 python3.9[101373]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:41:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:32.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:33 compute-1 ceph-mon[81715]: pgmap v394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:33 compute-1 sudo[101527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qinrvjoobfdlyktbvyscrsmrkwkzwquh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089293.2200391-61-138138996231203/AnsiballZ_file.py'
Jan 22 13:41:33 compute-1 sudo[101527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:33 compute-1 python3.9[101529]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:33 compute-1 sudo[101527]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:34.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:34 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:34.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:34 compute-1 sudo[101702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwpudmoppqtacdzcxbmxafldrzswpcna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089294.171839-85-104696751554181/AnsiballZ_stat.py'
Jan 22 13:41:34 compute-1 sudo[101702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:34 compute-1 python3.9[101704]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:34 compute-1 sudo[101702]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:35 compute-1 sudo[101780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnobswzbxuchqwnsdizpfjnmdftyyify ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089294.171839-85-104696751554181/AnsiballZ_file.py'
Jan 22 13:41:35 compute-1 sudo[101780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:35 compute-1 python3.9[101782]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.ljcifukg recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:35 compute-1 sudo[101780]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:35 compute-1 ceph-mon[81715]: pgmap v395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:36.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:36.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:36 compute-1 sudo[101932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhyawqteoskgzedkzsybcgalkklnstgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089296.207406-145-80949097015744/AnsiballZ_stat.py'
Jan 22 13:41:36 compute-1 sudo[101932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:36 compute-1 python3.9[101934]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:36 compute-1 sudo[101932]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:36 compute-1 ceph-mon[81715]: pgmap v396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:37 compute-1 sudo[102010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcguuevixgwmaruosvplnoqdcyjubauf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089296.207406-145-80949097015744/AnsiballZ_file.py'
Jan 22 13:41:37 compute-1 sudo[102010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:37 compute-1 python3.9[102012]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.0wtvek3s recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:37 compute-1 sudo[102010]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:37 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:37 compute-1 sudo[102162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfcrnxojsgieyokmoxwctrytsvqjnhgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089297.6573005-184-109219406358184/AnsiballZ_file.py'
Jan 22 13:41:37 compute-1 sudo[102162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:38.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:38 compute-1 python3.9[102164]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:41:38 compute-1 sudo[102162]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:38.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:38 compute-1 sudo[102314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvyezfbqxrbommkcfeuqhugjabapuafq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089298.3976462-208-16940069100625/AnsiballZ_stat.py'
Jan 22 13:41:38 compute-1 sudo[102314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:38 compute-1 python3.9[102316]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:38 compute-1 sudo[102314]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:38 compute-1 ceph-mon[81715]: pgmap v397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:39 compute-1 sudo[102392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqzvpfqakcggsjggcngmeuxsnndypctb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089298.3976462-208-16940069100625/AnsiballZ_file.py'
Jan 22 13:41:39 compute-1 sudo[102392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:39 compute-1 python3.9[102394]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:41:39 compute-1 sudo[102392]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:39 compute-1 sudo[102544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eonvoxctwyukvfnokepamniqnoeysvyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089299.4868512-208-269126472158035/AnsiballZ_stat.py'
Jan 22 13:41:39 compute-1 sudo[102544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:40 compute-1 python3.9[102546]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:40.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:40 compute-1 sudo[102544]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:40 compute-1 sudo[102622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blaghtdgygzzxzmbrloivgwjbmvcxpoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089299.4868512-208-269126472158035/AnsiballZ_file.py'
Jan 22 13:41:40 compute-1 sudo[102622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:40.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:40 compute-1 python3.9[102624]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:41:40 compute-1 sudo[102622]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:41 compute-1 sudo[102774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofgrhihnwreuisoddvnupnxdsyivjjiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089301.0145557-277-55133423976902/AnsiballZ_file.py'
Jan 22 13:41:41 compute-1 sudo[102774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:41 compute-1 ceph-mon[81715]: pgmap v398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:41 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:41 compute-1 python3.9[102776]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:41 compute-1 sudo[102774]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:42.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:42 compute-1 sudo[102926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bloiuaqixqjlysqrjvayriecdpyjzwfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089301.8626678-301-278060133927901/AnsiballZ_stat.py'
Jan 22 13:41:42 compute-1 sudo[102926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:42 compute-1 python3.9[102928]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:42 compute-1 sudo[102926]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:42.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:42 compute-1 sudo[103004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewzvmtvhlrhbpbkwdtohpmbnqllfxdru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089301.8626678-301-278060133927901/AnsiballZ_file.py'
Jan 22 13:41:42 compute-1 sudo[103004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:43 compute-1 python3.9[103006]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:43 compute-1 sudo[103004]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:43 compute-1 ceph-mon[81715]: pgmap v399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:43 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:43 compute-1 sudo[103156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqmlpdsppwaboshsjtqjbibxzofapwlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089303.3168716-337-103300535061567/AnsiballZ_stat.py'
Jan 22 13:41:43 compute-1 sudo[103156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:43 compute-1 python3.9[103158]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:43 compute-1 sudo[103156]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:44.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:44 compute-1 sudo[103234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jslcskiacokvptwxadmxevbbtlyyolso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089303.3168716-337-103300535061567/AnsiballZ_file.py'
Jan 22 13:41:44 compute-1 sudo[103234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:44 compute-1 python3.9[103236]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:44 compute-1 sudo[103234]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:44 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:44 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:44.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:45 compute-1 sudo[103386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezntzsflhkioffqpspwtwgjutrzsuzey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089304.7690547-373-206650150463198/AnsiballZ_systemd.py'
Jan 22 13:41:45 compute-1 sudo[103386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:45 compute-1 ceph-mon[81715]: pgmap v400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:45 compute-1 python3.9[103388]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:41:45 compute-1 systemd[1]: Reloading.
Jan 22 13:41:45 compute-1 systemd-rc-local-generator[103411]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:41:45 compute-1 systemd-sysv-generator[103414]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:41:46 compute-1 sudo[103386]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:46.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:46 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:46.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:46 compute-1 sudo[103575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmaohqfjqopoxtjfjivvqvsuvgtzanpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089306.4835346-397-211344499276593/AnsiballZ_stat.py'
Jan 22 13:41:46 compute-1 sudo[103575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:46 compute-1 python3.9[103577]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:46 compute-1 sudo[103575]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:47 compute-1 sudo[103653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snkfjaxppwetatdkubnrrkaqkyudwdeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089306.4835346-397-211344499276593/AnsiballZ_file.py'
Jan 22 13:41:47 compute-1 sudo[103653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:47 compute-1 python3.9[103655]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:47 compute-1 sudo[103653]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:47 compute-1 ceph-mon[81715]: pgmap v401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:47 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:47 compute-1 sudo[103805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjwagcggfnqehyxvqahcaktngirajmri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089307.6756308-433-33355280963577/AnsiballZ_stat.py'
Jan 22 13:41:47 compute-1 sudo[103805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:41:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:48.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:41:48 compute-1 python3.9[103807]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:48 compute-1 sudo[103805]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:48.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:48 compute-1 sudo[103883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljabkqeqnuaxyoijjgetpcaestrlhwdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089307.6756308-433-33355280963577/AnsiballZ_file.py'
Jan 22 13:41:48 compute-1 sudo[103883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:48 compute-1 ceph-mon[81715]: pgmap v402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:48 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:48 compute-1 python3.9[103885]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:48 compute-1 sudo[103883]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:49 compute-1 sudo[104035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znovyzfwkzacoaffjpfnukjcylsiydds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089309.070727-469-74999186463127/AnsiballZ_systemd.py'
Jan 22 13:41:49 compute-1 sudo[104035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:49 compute-1 python3.9[104037]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:41:49 compute-1 systemd[1]: Reloading.
Jan 22 13:41:49 compute-1 systemd-rc-local-generator[104067]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:41:49 compute-1 systemd-sysv-generator[104071]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:41:49 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:49 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:50 compute-1 systemd[1]: Starting Create netns directory...
Jan 22 13:41:50 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 13:41:50 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 13:41:50 compute-1 systemd[1]: Finished Create netns directory.
Jan 22 13:41:50 compute-1 sudo[104035]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:50.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:50.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:50 compute-1 ceph-mon[81715]: pgmap v403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:50 compute-1 python3.9[104231]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:41:51 compute-1 network[104248]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:41:51 compute-1 network[104249]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:41:51 compute-1 network[104250]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:41:51 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:52.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:52.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:52 compute-1 ceph-mon[81715]: pgmap v404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:54.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:54.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:54 compute-1 ceph-mon[81715]: pgmap v405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:54 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:56.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:56 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:56.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:57 compute-1 ceph-mon[81715]: pgmap v406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:57 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:57 compute-1 sudo[104510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsrpfxowfrjdukfcyllxypgekgxggzaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089317.5469363-547-198522643832114/AnsiballZ_stat.py'
Jan 22 13:41:57 compute-1 sudo[104510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:58 compute-1 python3.9[104512]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:58 compute-1 sudo[104510]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:58.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:58 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:58 compute-1 sudo[104588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytgprtrcqruqqovdgkhsgpdevyrzgojq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089317.5469363-547-198522643832114/AnsiballZ_file.py'
Jan 22 13:41:58 compute-1 sudo[104588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:58 compute-1 python3.9[104590]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:58 compute-1 sudo[104588]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:41:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:58.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:59 compute-1 sudo[104740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uydehhwfmyzchjcpxjskcxlvnffgbvyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089318.9507554-586-36696303501553/AnsiballZ_file.py'
Jan 22 13:41:59 compute-1 sudo[104740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:59 compute-1 ceph-mon[81715]: pgmap v407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:59 compute-1 python3.9[104742]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:59 compute-1 sudo[104740]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:00.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:00 compute-1 sudo[104892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lksmdzkzvyjyakakytyhoqlgulwehtze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089319.9466505-610-223913349296296/AnsiballZ_stat.py'
Jan 22 13:42:00 compute-1 sudo[104892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:00 compute-1 python3.9[104894]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:00 compute-1 sudo[104892]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:00 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:00 compute-1 ceph-mon[81715]: pgmap v408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:00 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:00.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:00 compute-1 sudo[104970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eukbermzuerufpfuwltmpfhamumamaqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089319.9466505-610-223913349296296/AnsiballZ_file.py'
Jan 22 13:42:00 compute-1 sudo[104970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:00 compute-1 python3.9[104972]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:00 compute-1 sudo[104970]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:01 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:02.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:02 compute-1 sudo[105122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrkkbsmuhtlqkkrogtcjmwspngczhyxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089321.6927633-655-236283998455745/AnsiballZ_timezone.py'
Jan 22 13:42:02 compute-1 sudo[105122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:02 compute-1 python3.9[105124]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 13:42:02 compute-1 systemd[1]: Starting Time & Date Service...
Jan 22 13:42:02 compute-1 systemd[1]: Started Time & Date Service.
Jan 22 13:42:02 compute-1 sudo[105122]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:02.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:02 compute-1 ceph-mon[81715]: pgmap v409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:02 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:03 compute-1 sudo[105278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrbnqtgvqqvxovkhefnmcwkvxzhtqquc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089323.3605745-682-124699377726917/AnsiballZ_file.py'
Jan 22 13:42:03 compute-1 sudo[105278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:03 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:03 compute-1 python3.9[105280]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:03 compute-1 sudo[105278]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:04.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:04 compute-1 sudo[105430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oouvxcnbupspjnhysgxzoqumgzytnfks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089324.1497335-706-268274683664100/AnsiballZ_stat.py'
Jan 22 13:42:04 compute-1 sudo[105430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:04.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:04 compute-1 python3.9[105432]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:04 compute-1 sudo[105430]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:04 compute-1 ceph-mon[81715]: pgmap v410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:04 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:05 compute-1 sudo[105508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-devcvonvhlzkiqznreunqolyhxuelacd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089324.1497335-706-268274683664100/AnsiballZ_file.py'
Jan 22 13:42:05 compute-1 sudo[105508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:05 compute-1 python3.9[105510]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:05 compute-1 sudo[105508]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:05 compute-1 sudo[105660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfdvwlupzkopksmiospowsstebfoovgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089325.626639-742-153067474215046/AnsiballZ_stat.py'
Jan 22 13:42:05 compute-1 sudo[105660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:06.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:06 compute-1 python3.9[105662]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:06 compute-1 sudo[105660]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:06 compute-1 sudo[105738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuraoliuixjiyiswsjzqkllbmcykyttc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089325.626639-742-153067474215046/AnsiballZ_file.py'
Jan 22 13:42:06 compute-1 sudo[105738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:06.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:06 compute-1 python3.9[105740]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xyla197f recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:06 compute-1 sudo[105738]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:06 compute-1 ceph-mon[81715]: pgmap v411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:06 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:07 compute-1 sudo[105890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysuhkrbryqwttqwlwyrndguymyjbvgaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089327.0888321-778-3488169605983/AnsiballZ_stat.py'
Jan 22 13:42:07 compute-1 sudo[105890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:07 compute-1 python3.9[105892]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:07 compute-1 sudo[105890]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:07 compute-1 sudo[105968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpwygdjobzmqpleqsjzrmhrkyvhdgcbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089327.0888321-778-3488169605983/AnsiballZ_file.py'
Jan 22 13:42:07 compute-1 sudo[105968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:07 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:08 compute-1 python3.9[105970]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:08 compute-1 sudo[105968]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 13:42:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:08.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 13:42:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:08.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:08 compute-1 ceph-mon[81715]: pgmap v412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:08 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:08 compute-1 sudo[106120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnqtosralijvkhmdkfylfbwgdqampuhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089328.3973927-817-95357506355559/AnsiballZ_command.py'
Jan 22 13:42:08 compute-1 sudo[106120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:09 compute-1 python3.9[106122]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:42:09 compute-1 sudo[106120]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:09 compute-1 sudo[106273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzmuwodqabzojomkyvqwfertewegtnar ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769089329.4081001-841-123901625695796/AnsiballZ_edpm_nftables_from_files.py'
Jan 22 13:42:09 compute-1 sudo[106273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:10.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:10 compute-1 python3[106275]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 13:42:10 compute-1 sudo[106273]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:10 compute-1 sudo[106276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:10 compute-1 sudo[106276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:10 compute-1 sudo[106276]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:10 compute-1 sudo[106325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:42:10 compute-1 sudo[106325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:10 compute-1 sudo[106325]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:10 compute-1 sudo[106350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:10 compute-1 sudo[106350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:10 compute-1 sudo[106350]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:10 compute-1 sudo[106402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:42:10 compute-1 sudo[106402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:10.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:10 compute-1 sudo[106589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrusmtgyncfkjjoxtqapcfiduyowzobg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089330.3863466-865-116418976097152/AnsiballZ_stat.py'
Jan 22 13:42:10 compute-1 sudo[106589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:10 compute-1 ceph-mon[81715]: pgmap v413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:10 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:10 compute-1 podman[106597]: 2026-01-22 13:42:10.914717582 +0000 UTC m=+0.056298634 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:42:11 compute-1 podman[106597]: 2026-01-22 13:42:11.0100404 +0000 UTC m=+0.151621452 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:42:11 compute-1 python3.9[106596]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:11 compute-1 sudo[106589]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:11 compute-1 sudo[106758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrfdnrwslixdshfbpkjkvujypcflpaha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089330.3863466-865-116418976097152/AnsiballZ_file.py'
Jan 22 13:42:11 compute-1 sudo[106758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:11 compute-1 sudo[106402]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:11 compute-1 python3.9[106762]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:11 compute-1 sudo[106758]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:11 compute-1 sudo[106793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:11 compute-1 sudo[106793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:11 compute-1 sudo[106793]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:11 compute-1 sudo[106841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:42:11 compute-1 sudo[106841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:11 compute-1 sudo[106841]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:11 compute-1 sudo[106867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:11 compute-1 sudo[106867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:11 compute-1 sudo[106867]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:11 compute-1 sudo[106892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:42:11 compute-1 sudo[106892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:12 compute-1 sudo[106892]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:12.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:12 compute-1 sudo[107074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkzftnmnhduwihsiohsrhhompjfqiyvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089332.0151615-901-144889858026480/AnsiballZ_stat.py'
Jan 22 13:42:12 compute-1 sudo[107074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:12.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:13 compute-1 python3.9[107076]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:13 compute-1 sudo[107074]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:42:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:42:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:42:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:42:13 compute-1 sudo[107199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvnebccqjuhpkylacaepwztapuecegql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089332.0151615-901-144889858026480/AnsiballZ_copy.py'
Jan 22 13:42:13 compute-1 sudo[107199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:13 compute-1 python3.9[107201]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089332.0151615-901-144889858026480/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:13 compute-1 sudo[107199]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:14 compute-1 ceph-mon[81715]: pgmap v414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:14 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:42:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:42:14 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:42:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:42:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:42:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:42:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:14.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:14 compute-1 sudo[107351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvqsbzhmwuzlkzpxkxafzdpicdnxpzzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089333.903677-946-238593706931710/AnsiballZ_stat.py'
Jan 22 13:42:14 compute-1 sudo[107351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:14 compute-1 python3.9[107353]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:14 compute-1 sudo[107351]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:14.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:15 compute-1 sudo[107429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hokbfsxahhgdlaclgkeeasrfojcwufdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089333.903677-946-238593706931710/AnsiballZ_file.py'
Jan 22 13:42:15 compute-1 sudo[107429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:15 compute-1 python3.9[107431]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:15 compute-1 sudo[107429]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:15 compute-1 ceph-mon[81715]: pgmap v415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:15 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:15 compute-1 sudo[107581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaoqzeofmpxictenfcvmpiiggzwhjpsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089335.6162362-982-176616150753074/AnsiballZ_stat.py'
Jan 22 13:42:15 compute-1 sudo[107581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:16 compute-1 python3.9[107583]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:16 compute-1 sudo[107581]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:16.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.388376) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336388452, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2585, "num_deletes": 251, "total_data_size": 5255458, "memory_usage": 5338544, "flush_reason": "Manual Compaction"}
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336407079, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3384523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7341, "largest_seqno": 9921, "table_properties": {"data_size": 3374668, "index_size": 5773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25170, "raw_average_key_size": 21, "raw_value_size": 3352581, "raw_average_value_size": 2826, "num_data_blocks": 255, "num_entries": 1186, "num_filter_entries": 1186, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089177, "oldest_key_time": 1769089177, "file_creation_time": 1769089336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 18772 microseconds, and 9009 cpu microseconds.
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.407164) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3384523 bytes OK
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.407188) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.408891) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.408908) EVENT_LOG_v1 {"time_micros": 1769089336408903, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.408928) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5243460, prev total WAL file size 5243460, number of live WAL files 2.
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.410171) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3305KB)], [15(8589KB)]
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336410300, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 12179959, "oldest_snapshot_seqno": -1}
Jan 22 13:42:16 compute-1 sudo[107659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjgfesfnetzlnzbfgarqmrkxlgmjwdsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089335.6162362-982-176616150753074/AnsiballZ_file.py'
Jan 22 13:42:16 compute-1 sudo[107659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 4244 keys, 10523929 bytes, temperature: kUnknown
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336500320, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 10523929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10489598, "index_size": 22637, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 103740, "raw_average_key_size": 24, "raw_value_size": 10406832, "raw_average_value_size": 2452, "num_data_blocks": 980, "num_entries": 4244, "num_filter_entries": 4244, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.500758) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 10523929 bytes
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.502349) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.0 rd, 116.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.4 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 4767, records dropped: 523 output_compression: NoCompression
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.502367) EVENT_LOG_v1 {"time_micros": 1769089336502358, "job": 6, "event": "compaction_finished", "compaction_time_micros": 90223, "compaction_time_cpu_micros": 25243, "output_level": 6, "num_output_files": 1, "total_output_size": 10523929, "num_input_records": 4767, "num_output_records": 4244, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336503559, "job": 6, "event": "table_file_deletion", "file_number": 17}
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336505304, "job": 6, "event": "table_file_deletion", "file_number": 15}
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.410017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:16 compute-1 python3.9[107661]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:16.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:16 compute-1 sudo[107659]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:17 compute-1 sudo[107811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phqubuhaamxgbywezpxfurdxyatnxuoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089337.1020894-1018-119502950798529/AnsiballZ_stat.py'
Jan 22 13:42:17 compute-1 sudo[107811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:17 compute-1 python3.9[107813]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:17 compute-1 sudo[107811]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:17 compute-1 ceph-mon[81715]: pgmap v416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:17 compute-1 sudo[107889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvuzlrpqzczxssgspypsrlrdzhnmkzht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089337.1020894-1018-119502950798529/AnsiballZ_file.py'
Jan 22 13:42:17 compute-1 sudo[107889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:18 compute-1 python3.9[107891]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:18.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:18 compute-1 sudo[107889]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:18 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:18 compute-1 ceph-mon[81715]: pgmap v417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:18 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:18 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:18.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:19 compute-1 sudo[108041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xarljgodlvstpouneimpidmgutvezlli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089338.8258023-1057-105329840614988/AnsiballZ_command.py'
Jan 22 13:42:19 compute-1 sudo[108041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:19 compute-1 python3.9[108043]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:42:19 compute-1 sudo[108041]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:20 compute-1 sudo[108196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egajkjxholzybyvxykvncuapnoyeqqxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089339.697641-1081-102740030543479/AnsiballZ_blockinfile.py'
Jan 22 13:42:20 compute-1 sudo[108196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:42:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:20.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:42:20 compute-1 python3.9[108198]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:20 compute-1 sudo[108196]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:20.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:20 compute-1 sudo[108290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:20 compute-1 sudo[108290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:20 compute-1 sudo[108290]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:20 compute-1 sudo[108323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:42:20 compute-1 sudo[108323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:20 compute-1 sudo[108323]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:21 compute-1 sudo[108398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvptczdgxkgehvygzaqydgxnnijpcgmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089340.7641876-1108-113141861908334/AnsiballZ_file.py'
Jan 22 13:42:21 compute-1 sudo[108398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:21 compute-1 ceph-mon[81715]: pgmap v418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:42:21 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:42:21 compute-1 python3.9[108400]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:21 compute-1 sudo[108398]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:21 compute-1 sudo[108550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eksrtosfhzgtkvtllnzpherstdrillva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089341.4994369-1108-212196967944224/AnsiballZ_file.py'
Jan 22 13:42:21 compute-1 sudo[108550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:22 compute-1 python3.9[108552]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:22 compute-1 sudo[108550]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:22.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:22.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.760610) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342760894, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 338, "num_deletes": 250, "total_data_size": 247179, "memory_usage": 253656, "flush_reason": "Manual Compaction"}
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Jan 22 13:42:22 compute-1 sudo[108702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkvcxkwbrcijrpswecmqtycnzbxbmisp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089342.4104955-1153-110124926216835/AnsiballZ_mount.py'
Jan 22 13:42:22 compute-1 sudo[108702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342925805, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 162536, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9926, "largest_seqno": 10259, "table_properties": {"data_size": 160358, "index_size": 342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5649, "raw_average_key_size": 19, "raw_value_size": 156071, "raw_average_value_size": 534, "num_data_blocks": 14, "num_entries": 292, "num_filter_entries": 292, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089337, "oldest_key_time": 1769089337, "file_creation_time": 1769089342, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 165413 microseconds, and 1771 cpu microseconds.
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.926031) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 162536 bytes OK
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.926098) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.938491) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.938541) EVENT_LOG_v1 {"time_micros": 1769089342938529, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.938566) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 244803, prev total WAL file size 244803, number of live WAL files 2.
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.939445) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(158KB)], [18(10MB)]
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342939486, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10686465, "oldest_snapshot_seqno": -1}
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 4026 keys, 7892140 bytes, temperature: kUnknown
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342981631, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 7892140, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7862824, "index_size": 18134, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 99787, "raw_average_key_size": 24, "raw_value_size": 7787312, "raw_average_value_size": 1934, "num_data_blocks": 782, "num_entries": 4026, "num_filter_entries": 4026, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089342, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.981971) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7892140 bytes
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.983719) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 252.8 rd, 186.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(114.3) write-amplify(48.6) OK, records in: 4536, records dropped: 510 output_compression: NoCompression
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.983767) EVENT_LOG_v1 {"time_micros": 1769089342983747, "job": 8, "event": "compaction_finished", "compaction_time_micros": 42272, "compaction_time_cpu_micros": 20477, "output_level": 6, "num_output_files": 1, "total_output_size": 7892140, "num_input_records": 4536, "num_output_records": 4026, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342984027, "job": 8, "event": "table_file_deletion", "file_number": 20}
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342985900, "job": 8, "event": "table_file_deletion", "file_number": 18}
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.939352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.986087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.986095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.986097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.986099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.986100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:23 compute-1 python3.9[108704]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 13:42:23 compute-1 sudo[108702]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:23 compute-1 sudo[108854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvmrjpsxuvcuynfnftavsqorwpbuzpyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089343.2518406-1153-61719923684869/AnsiballZ_mount.py'
Jan 22 13:42:23 compute-1 ceph-mon[81715]: pgmap v419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:23 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:23 compute-1 sudo[108854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:23 compute-1 python3.9[108856]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 13:42:23 compute-1 sudo[108854]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:24.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:24.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:24 compute-1 sshd-session[101223]: Connection closed by 192.168.122.30 port 40192
Jan 22 13:42:24 compute-1 sshd-session[101220]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:42:24 compute-1 systemd[1]: session-39.scope: Deactivated successfully.
Jan 22 13:42:24 compute-1 systemd[1]: session-39.scope: Consumed 30.304s CPU time.
Jan 22 13:42:24 compute-1 systemd-logind[787]: Session 39 logged out. Waiting for processes to exit.
Jan 22 13:42:24 compute-1 systemd-logind[787]: Removed session 39.
Jan 22 13:42:25 compute-1 ceph-mon[81715]: pgmap v420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:26.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:26 compute-1 ceph-mon[81715]: pgmap v421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:26.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:28.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:28 compute-1 ceph-mon[81715]: pgmap v422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:28 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:28 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:28.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:30.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:30 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:30.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:31 compute-1 ceph-mon[81715]: pgmap v423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:42:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:32.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:42:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:32 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 13:42:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:32.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:33 compute-1 ceph-mon[81715]: pgmap v424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:33 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:42:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:34.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:42:34 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:34.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:35 compute-1 ceph-mon[81715]: pgmap v425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:35 compute-1 sshd-session[108884]: Accepted publickey for zuul from 192.168.122.30 port 52920 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:42:35 compute-1 systemd-logind[787]: New session 40 of user zuul.
Jan 22 13:42:35 compute-1 systemd[1]: Started Session 40 of User zuul.
Jan 22 13:42:35 compute-1 sshd-session[108884]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:42:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:36.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:36.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:37 compute-1 ceph-mon[81715]: pgmap v426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:37 compute-1 sudo[109037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgmuzukkshelfgguivjwokfjqhjanrmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089355.6582594-24-268774623418957/AnsiballZ_tempfile.py'
Jan 22 13:42:37 compute-1 sudo[109037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:37 compute-1 python3.9[109039]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 22 13:42:37 compute-1 sudo[109037]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:38.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:38 compute-1 sudo[109189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjvjfurahvnzuamvwjzklxndgrwlhodc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089358.210663-60-102030470089147/AnsiballZ_stat.py'
Jan 22 13:42:38 compute-1 sudo[109189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:42:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:38.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:42:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:38 compute-1 python3.9[109191]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:42:38 compute-1 sudo[109189]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:39 compute-1 ceph-mon[81715]: pgmap v427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:39 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:39 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:40.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:40 compute-1 sudo[109343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miunrzdnhsgwxukwgynmwwfdgbgafbji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089360.1186-84-213365165405936/AnsiballZ_slurp.py'
Jan 22 13:42:40 compute-1 sudo[109343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:40.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:40 compute-1 python3.9[109345]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 22 13:42:40 compute-1 sudo[109343]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:40 compute-1 ceph-mon[81715]: pgmap v428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:41 compute-1 sudo[109495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlkscdmcykhxwyivfkmfbxsteerushji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089361.1579506-108-135509256431941/AnsiballZ_stat.py'
Jan 22 13:42:41 compute-1 sudo[109495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:41 compute-1 python3.9[109497]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible._xxekxtc follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:41 compute-1 sudo[109495]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:41 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:42.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:42 compute-1 sudo[109620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woykznhreajfvhgdqpcssjptjbdghlii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089361.1579506-108-135509256431941/AnsiballZ_copy.py'
Jan 22 13:42:42 compute-1 sudo[109620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:42 compute-1 python3.9[109622]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible._xxekxtc mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089361.1579506-108-135509256431941/.source._xxekxtc _original_basename=.o1khmyd3 follow=False checksum=9893b3bde8503c371031e4467aece9772279f87c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:42 compute-1 sudo[109620]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:42.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:43 compute-1 sudo[109772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnyiysdakkcilbowkmisgswadguwdeil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089362.7211757-153-241039106393979/AnsiballZ_setup.py'
Jan 22 13:42:43 compute-1 sudo[109772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:43 compute-1 ceph-mon[81715]: pgmap v429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:43 compute-1 python3.9[109774]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:42:43 compute-1 sudo[109772]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:44.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:44 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:44.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:44 compute-1 sudo[109924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itodbmedqalfoqularytbdolaosburvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089364.0866659-178-278498064441796/AnsiballZ_blockinfile.py'
Jan 22 13:42:44 compute-1 sudo[109924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:44 compute-1 python3.9[109926]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2ocldELG9EA3TbFx5afl1mbwf9X+3Gzx1pKWvAq8+0s5gE2NeAD23paYiiaQ+/r8QE6CHtXOoy/H9FGAGU3oxMrZnEX7nslelo1+Q7jWdE7ILrzUhQpkJeXJNMrA3p7aBbMxEqMXO9Ydl3Cu0CA+jItIQW1oTWLvS+BsWbES09z++jcPgu6HJu1lFXD9GgU53AfhpFcnhuxK8AnNyG1iy1Zus5Xi2NlME94THioW0/1Ek8Pl/PbSdpaErM1lgrZ7Yl/MdCelTNQI4tQrJebtNynEMhrYTBwbruS6YIia/ZSxDJZWt9bg1dpkd24KSpr4hz5kDn4sCFHyPV/JMYmuvTwFByBXc92tBbYeQU5KMBP8OFjlzfm1uAfnM1BOyrPOy7E5RFig010mTP/VruBFb/T+3Z9DqjZCkGagdrKrV80AwqnAsn/mMG/tHarrHLr8BRX1UIFUz2qfFaBpSkmeQ6u3ERLQyvJIjXaXjvvmQVDRQxd8P5HWM57joMC2P+c8=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFTUVWfsHbDnQr7ZM9BkSRv9ghRtTlzwZgmDm9W4jCII
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGjBy4pT9xvRinN5D7FG54iZjTb5U7Le6fRnUKrD4anfJZQ1Vd0mJxikxxi0T2VsVngeW+U82a0S7cK3UeWIL9s=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCz1S+AyqG+uG2QcnBxDRKRCSQ1ADb7AX9YKwfPf8jy0Q8YD3aJm/CVexcMyR1BQUaGjRFoZkm/O4ekVQ36cOQ2M7HRv78pGNm0BGtfNeFeRB5w5+RSPgj1rY9joGiRIZoyVVlz9uuM9NTlYiNC/X5gLWfreUbCGl6lDKkxGdOjUnjuZ2djcx48WXZurkkcjd9j3WCQl899CDpx6elTEEZaV3/mbpfEtOtTXEFfoq1Z1XSjngnkZMARqt+JIN02f6kgEgWNSRAJxqYbFz1jtY43UJ/C2mO29LedfXOW3dpKCC6QHdPDSQJp2Jrf0izl52jvmpDvr6wWY9PW9AmMyxh1gSuP1a/uteKBBf7vlxtpYJWDSivQxPZw3RbBZuhspxefEOUXkwGNycW/+rPGFZRrAVYWLTZ6dLn0aviyE1+ZEDIMJop1CohPOhvJxJ7s1ulnjvVDc7kLhmBewXbeY3Lp6SoMUK8ziKHsTr2Y/RfK8d7LXmARc7+O9VWI4VVV8U=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIArjsNRQko0Q06DDAhSCoRYTLidRzR9vGa18TMghIrTh
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBDfBKVIdWmS1D3kNVJYnvsERskkDp7/TXgEseqOABxcNISULCvy6hWTcKYjXdFK5Yrl53dvxfzzAGTPPln3an4=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDARChhswCxxjhho4qSL0BKXUq4AvMW1MDxy3K15MpkFlnctOqsuulAZum+3JFif15RegZjzUC7sGyhSLoFUnXimQHlJIlaGg+Vr+vh23ujuk8uWbwf6q8CF03tz4edapNjNQ+SCuGRJkINMaGGTzgBwoStqctW97kU0Z+A4cqgyMG8V8ZvSG7it0puvEOIYw5rtCA7Svueoxb5UMO33HTJbIuILYxnfEyUIHSsziJHGhRFJJ7PcNH3B4Ogew4pg31GaTi9pIHKHt/YE6WKj7P7HxpTVvgBsI27Pveo4PPkH4yCwjZlntIAvJhn+6czWlsTsmf+EUSf+u1mst9EmzJ/BztwNxcUjlAkf1E3UzoEKB70ShX+201s+/Z9VrHZj4Ku7Ptht9N5F8J01j2+qYCnmeLK9AWqkanEZy5N+hICP1XbFk3IlKyUW4Km0CXwZmXlvdC5Juyt74uJfeiNcsarU75daE2Zx4+j76+JtN8BKgrIAzEcyLOLCOxspAtxGB8=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILuPMhHnuBKJH3E1cndLaLMVE35g920qreV5wjp7kiGA
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMjB1VLvlmcfY82jQpLEcCHkJB16T8jGBBdZAl8DHhdWgqjciDgZx2zOlmbn8OtO4dCPZsLT8VomlJYVqIcvuZ4=
                                              create=True mode=0644 path=/tmp/ansible._xxekxtc state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:45 compute-1 sudo[109924]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:45 compute-1 ceph-mon[81715]: pgmap v430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:45 compute-1 sudo[110076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaoigurjlqcczaseasjbgzlclmefzmfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089365.3417957-202-59488400965493/AnsiballZ_command.py'
Jan 22 13:42:45 compute-1 sudo[110076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:45 compute-1 python3.9[110078]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible._xxekxtc' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:42:46 compute-1 sudo[110076]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:46.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:46.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:46 compute-1 sudo[110230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkxvvjdyvzhnmiwxfmefxsveaqvkdmgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089366.286694-226-265581119558851/AnsiballZ_file.py'
Jan 22 13:42:46 compute-1 sudo[110230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:46 compute-1 ceph-mon[81715]: pgmap v431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:46 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:46 compute-1 python3.9[110232]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible._xxekxtc state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:46 compute-1 sudo[110230]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:47 compute-1 sshd-session[108887]: Connection closed by 192.168.122.30 port 52920
Jan 22 13:42:47 compute-1 sshd-session[108884]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:42:47 compute-1 systemd[1]: session-40.scope: Deactivated successfully.
Jan 22 13:42:47 compute-1 systemd[1]: session-40.scope: Consumed 5.079s CPU time.
Jan 22 13:42:47 compute-1 systemd-logind[787]: Session 40 logged out. Waiting for processes to exit.
Jan 22 13:42:47 compute-1 systemd-logind[787]: Removed session 40.
Jan 22 13:42:47 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:47 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:48.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:48.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:48 compute-1 ceph-mon[81715]: pgmap v432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:48 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:50.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:50.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:51 compute-1 ceph-mon[81715]: pgmap v433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:51 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:42:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:52.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:42:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:52.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:52 compute-1 sshd-session[110257]: Accepted publickey for zuul from 192.168.122.30 port 45512 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:42:52 compute-1 systemd-logind[787]: New session 41 of user zuul.
Jan 22 13:42:52 compute-1 systemd[1]: Started Session 41 of User zuul.
Jan 22 13:42:52 compute-1 sshd-session[110257]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:42:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:53 compute-1 ceph-mon[81715]: pgmap v434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:53 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:54 compute-1 python3.9[110410]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:42:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:54.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:54.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:55 compute-1 ceph-mon[81715]: pgmap v435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:55 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:55 compute-1 sudo[110564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlqnmqgfbbyoarrsyuvnvymzggrplobm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089374.7199662-57-69091481037516/AnsiballZ_systemd.py'
Jan 22 13:42:55 compute-1 sudo[110564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:55 compute-1 python3.9[110566]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 13:42:55 compute-1 sudo[110564]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:56 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:56.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:56 compute-1 sudo[110718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spfpwctdbhljswtsshykdmugjqelahsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089376.1509037-81-46429327344887/AnsiballZ_systemd.py'
Jan 22 13:42:56 compute-1 sudo[110718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:56.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:56 compute-1 python3.9[110720]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:42:56 compute-1 sudo[110718]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:57 compute-1 ceph-mon[81715]: pgmap v436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:57 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:57 compute-1 sudo[110871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxsulzqmtsbxslhblsgmndoegwthekkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089377.2958288-108-144687883196244/AnsiballZ_command.py'
Jan 22 13:42:57 compute-1 sudo[110871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:57 compute-1 python3.9[110873]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:42:57 compute-1 sudo[110871]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:58 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 364 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:58.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:42:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:58.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:58 compute-1 sudo[111024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzmxcjtspdgzjsmotovbqceznsuoylho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089378.3126702-132-183767861791525/AnsiballZ_stat.py'
Jan 22 13:42:58 compute-1 sudo[111024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:58 compute-1 python3.9[111026]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:42:58 compute-1 sudo[111024]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:59 compute-1 ceph-mon[81715]: pgmap v437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:59 compute-1 sudo[111176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edagkjydundjwifdypjbpkkjvnepgpvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089379.2991495-159-68737363679602/AnsiballZ_file.py'
Jan 22 13:42:59 compute-1 sudo[111176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:59 compute-1 python3.9[111178]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:59 compute-1 sudo[111176]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:00.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:00 compute-1 sshd-session[110260]: Connection closed by 192.168.122.30 port 45512
Jan 22 13:43:00 compute-1 sshd-session[110257]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:43:00 compute-1 systemd[1]: session-41.scope: Deactivated successfully.
Jan 22 13:43:00 compute-1 systemd[1]: session-41.scope: Consumed 3.843s CPU time.
Jan 22 13:43:00 compute-1 systemd-logind[787]: Session 41 logged out. Waiting for processes to exit.
Jan 22 13:43:00 compute-1 systemd-logind[787]: Removed session 41.
Jan 22 13:43:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:00.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:00 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:01 compute-1 ceph-mon[81715]: pgmap v438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:01 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:01 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:02.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:02.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:02 compute-1 ceph-mon[81715]: pgmap v439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:02 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:02 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:04.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:04 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:04.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:05 compute-1 ceph-mon[81715]: pgmap v440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:06 compute-1 sshd-session[111203]: Accepted publickey for zuul from 192.168.122.30 port 55492 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:43:06 compute-1 systemd-logind[787]: New session 42 of user zuul.
Jan 22 13:43:06 compute-1 systemd[1]: Started Session 42 of User zuul.
Jan 22 13:43:06 compute-1 sshd-session[111203]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:43:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:06.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:06 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:06.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:07 compute-1 python3.9[111356]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:43:07 compute-1 ceph-mon[81715]: pgmap v441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:07 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:08 compute-1 sudo[111510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uznwlgwvlyfaxgcxqenrzgqnsuhtjdrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089387.7518442-63-216639377970518/AnsiballZ_setup.py'
Jan 22 13:43:08 compute-1 sudo[111510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:08.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:08 compute-1 python3.9[111512]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:43:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:08 compute-1 sudo[111510]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:08.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:09 compute-1 sudo[111594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whmpsqbzejhocuniagyhfjedhbnkgutd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089387.7518442-63-216639377970518/AnsiballZ_dnf.py'
Jan 22 13:43:09 compute-1 sudo[111594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:09 compute-1 python3.9[111596]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 13:43:09 compute-1 ceph-mon[81715]: pgmap v442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:09 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:10.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:10 compute-1 sudo[111594]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:10.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:11 compute-1 ceph-mon[81715]: pgmap v443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:11 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:11 compute-1 python3.9[111747]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:43:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:12.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:12 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:12.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:13 compute-1 python3.9[111898]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 13:43:13 compute-1 ceph-mon[81715]: pgmap v444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:13 compute-1 python3.9[112048]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:43:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:14.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:14 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:14 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:14 compute-1 python3.9[112198]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:43:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:14.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:15 compute-1 sshd-session[111206]: Connection closed by 192.168.122.30 port 55492
Jan 22 13:43:15 compute-1 sshd-session[111203]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:43:15 compute-1 systemd[1]: session-42.scope: Deactivated successfully.
Jan 22 13:43:15 compute-1 systemd[1]: session-42.scope: Consumed 5.879s CPU time.
Jan 22 13:43:15 compute-1 systemd-logind[787]: Session 42 logged out. Waiting for processes to exit.
Jan 22 13:43:15 compute-1 systemd-logind[787]: Removed session 42.
Jan 22 13:43:15 compute-1 ceph-mon[81715]: pgmap v445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:15 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:16.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:16 compute-1 ceph-mon[81715]: pgmap v446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:16.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:18.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:18 compute-1 ceph-mon[81715]: pgmap v447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:18 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:18 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:18.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:20.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:21 compute-1 sudo[112225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:43:21 compute-1 sudo[112225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:21 compute-1 sshd-session[112223]: Accepted publickey for zuul from 192.168.122.30 port 51998 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:43:21 compute-1 sudo[112225]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:21 compute-1 systemd-logind[787]: New session 43 of user zuul.
Jan 22 13:43:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:21.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:21 compute-1 systemd[1]: Started Session 43 of User zuul.
Jan 22 13:43:21 compute-1 sshd-session[112223]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:43:21 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:21 compute-1 sudo[112251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:43:21 compute-1 sudo[112251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:21 compute-1 sudo[112251]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:21 compute-1 sudo[112277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:43:21 compute-1 sudo[112277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:21 compute-1 sudo[112277]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:21 compute-1 sudo[112325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:43:21 compute-1 sudo[112325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:21 compute-1 sudo[112325]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:22 compute-1 ceph-mon[81715]: pgmap v448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:43:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:43:22 compute-1 python3.9[112508]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:43:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:22.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:23.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:23 compute-1 ceph-mon[81715]: pgmap v449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:23 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:23 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:43:23 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:43:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:23 compute-1 sudo[112662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytuozzpmhbqqxlvmkuvckkouljdrswxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089403.5141392-111-28117111752416/AnsiballZ_file.py'
Jan 22 13:43:23 compute-1 sudo[112662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:24 compute-1 python3.9[112664]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:24 compute-1 sudo[112662]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:43:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:43:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:43:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:43:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:43:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:43:24 compute-1 sudo[112814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzpguusjwmtuhzaktngrdkdknbzfcfrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089404.3745632-111-236575937605589/AnsiballZ_file.py'
Jan 22 13:43:24 compute-1 sudo[112814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:24.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:24 compute-1 python3.9[112816]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:24 compute-1 sudo[112814]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:25.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:25 compute-1 sudo[112966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwfkeopdeebebqslvflmgkflnhozvrxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089405.0622706-155-92267369894686/AnsiballZ_stat.py'
Jan 22 13:43:25 compute-1 sudo[112966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:25 compute-1 ceph-mon[81715]: pgmap v450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:25 compute-1 python3.9[112968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:25 compute-1 sudo[112966]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:26 compute-1 sudo[113089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elxmqsuxswmptlzimmsltzpodqcsckdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089405.0622706-155-92267369894686/AnsiballZ_copy.py'
Jan 22 13:43:26 compute-1 sudo[113089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:26 compute-1 python3.9[113091]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089405.0622706-155-92267369894686/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=45a6f40b402a0f4b7a12be1b6902e3f2431fd4a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:26 compute-1 sudo[113089]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:26.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:26 compute-1 sudo[113241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvptuzmpcrjjnuimfsypdyhwaivedvlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089406.7139595-155-247888994883335/AnsiballZ_stat.py'
Jan 22 13:43:26 compute-1 sudo[113241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:27 compute-1 python3.9[113243]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:27 compute-1 sudo[113241]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:27.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:27 compute-1 sudo[113364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibnzzekcnezqeelqvjmivhlbhcmtqldl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089406.7139595-155-247888994883335/AnsiballZ_copy.py'
Jan 22 13:43:27 compute-1 sudo[113364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:27 compute-1 ceph-mon[81715]: pgmap v451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:27 compute-1 python3.9[113366]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089406.7139595-155-247888994883335/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=cc1c70588824ebebf3437effcc8b7daf397d0332 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:27 compute-1 sudo[113364]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:28 compute-1 sudo[113516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhqediywmhdssjchwahfdvrgowihcmvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089407.8758016-155-204376858758214/AnsiballZ_stat.py'
Jan 22 13:43:28 compute-1 sudo[113516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:28 compute-1 python3.9[113518]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:28 compute-1 sudo[113516]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:28 compute-1 sudo[113639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbhvixxffssaakhfhrhqmqisxxqmlrnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089407.8758016-155-204376858758214/AnsiballZ_copy.py'
Jan 22 13:43:28 compute-1 sudo[113639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:28.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:28 compute-1 ceph-mon[81715]: pgmap v452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:28 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:28 compute-1 python3.9[113641]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089407.8758016-155-204376858758214/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=e5bff03c51cae308bb9493d7cdb7c5ec290ee48d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:28 compute-1 sudo[113639]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:29.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:29 compute-1 sudo[113791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxqxcudxfkvibznupxcoghobcqgaoasv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089409.1456668-283-249641548580788/AnsiballZ_file.py'
Jan 22 13:43:29 compute-1 sudo[113791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:29 compute-1 python3.9[113793]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:29 compute-1 sudo[113791]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:29 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 399 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:30 compute-1 sudo[113943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvqgofhmvshjymdxwsdmoaawneozgtkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089409.7683046-283-242801926253353/AnsiballZ_file.py'
Jan 22 13:43:30 compute-1 sudo[113943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:30 compute-1 python3.9[113945]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:30 compute-1 sudo[113943]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:30 compute-1 sudo[114095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krxjhgxkkwmssuustegpyohkigvlouga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089410.4250064-326-39280101560777/AnsiballZ_stat.py'
Jan 22 13:43:30 compute-1 sudo[114095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:30.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:30 compute-1 sudo[114098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:43:30 compute-1 sudo[114098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:30 compute-1 sudo[114098]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:30 compute-1 ceph-mon[81715]: pgmap v453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:30 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:30 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:43:30 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:43:30 compute-1 python3.9[114097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:30 compute-1 sudo[114095]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:30 compute-1 sudo[114123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:43:30 compute-1 sudo[114123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:30 compute-1 sudo[114123]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:31 compute-1 sudo[114268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znobuguuypyjbfdlxqhdgxpblzpwjmgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089410.4250064-326-39280101560777/AnsiballZ_copy.py'
Jan 22 13:43:31 compute-1 sudo[114268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:31.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:31 compute-1 python3.9[114270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089410.4250064-326-39280101560777/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=9a7f8c9243bfe06a5e62a169a5db356d4082d0fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:31 compute-1 sudo[114268]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:31 compute-1 sudo[114420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogpgcnkadntemtgkywmyytooqsjlqgqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089411.5760612-326-212427590257698/AnsiballZ_stat.py'
Jan 22 13:43:31 compute-1 sudo[114420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:32 compute-1 python3.9[114422]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:32 compute-1 sudo[114420]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:32 compute-1 sudo[114543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kugwjspymavqaajmcgypgazkqfhedwhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089411.5760612-326-212427590257698/AnsiballZ_copy.py'
Jan 22 13:43:32 compute-1 sudo[114543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:32 compute-1 python3.9[114545]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089411.5760612-326-212427590257698/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=9db852ea1063f3b3372c70e7b1ec0fee5b9f16e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:32 compute-1 sudo[114543]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:32.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:32 compute-1 ceph-mon[81715]: pgmap v454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:33 compute-1 sudo[114695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bubvrowuhzftyoqlcrptjqqpzjexobym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089412.754164-326-276235505967771/AnsiballZ_stat.py'
Jan 22 13:43:33 compute-1 sudo[114695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:33 compute-1 python3.9[114697]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:33 compute-1 sudo[114695]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:33.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:33 compute-1 sudo[114818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvzcrvxuuwzhkgxmfgnenyylhwyrisdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089412.754164-326-276235505967771/AnsiballZ_copy.py'
Jan 22 13:43:33 compute-1 sudo[114818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:33 compute-1 python3.9[114820]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089412.754164-326-276235505967771/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=1dc995048b00a644a460f48b58c367088ca51907 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:33 compute-1 sudo[114818]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:34 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:34 compute-1 sudo[114970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhcuxcduaexruvvyezviutoqocmemzvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089413.9808133-451-69414412726424/AnsiballZ_file.py'
Jan 22 13:43:34 compute-1 sudo[114970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:34 compute-1 python3.9[114972]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:34 compute-1 sudo[114970]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:34.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:34 compute-1 sudo[115122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wglhiegwwntdrzxgawzkyqjlrbwrojso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089414.6134892-451-120405457999189/AnsiballZ_file.py'
Jan 22 13:43:34 compute-1 sudo[115122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:35 compute-1 python3.9[115124]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:35 compute-1 sudo[115122]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:35 compute-1 ceph-mon[81715]: pgmap v455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:35.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:35 compute-1 sudo[115274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzlqtbcuqamrdurjbqhkftqclhxzvoqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089415.2755413-498-124885537614236/AnsiballZ_stat.py'
Jan 22 13:43:35 compute-1 sudo[115274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:35 compute-1 python3.9[115276]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:35 compute-1 sudo[115274]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:36 compute-1 sudo[115397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fngspepudrycnzrqgqxjnujlencvqzjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089415.2755413-498-124885537614236/AnsiballZ_copy.py'
Jan 22 13:43:36 compute-1 sudo[115397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:36 compute-1 python3.9[115399]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089415.2755413-498-124885537614236/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=202ca5a0fe6e8422be7d63e3db24707225b535c1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:36 compute-1 sudo[115397]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:36 compute-1 sudo[115549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkuysooqdhjponzayyotfscebidzpxim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089416.4914997-498-123452227466045/AnsiballZ_stat.py'
Jan 22 13:43:36 compute-1 sudo[115549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:36.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:37 compute-1 python3.9[115551]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:37 compute-1 sudo[115549]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:37.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:37 compute-1 ceph-mon[81715]: pgmap v456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:37 compute-1 sudo[115672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngpilzjimqfokuitcainorxtuaqceymu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089416.4914997-498-123452227466045/AnsiballZ_copy.py'
Jan 22 13:43:37 compute-1 sudo[115672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:37 compute-1 python3.9[115674]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089416.4914997-498-123452227466045/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=9db852ea1063f3b3372c70e7b1ec0fee5b9f16e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:37 compute-1 sudo[115672]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:38 compute-1 sudo[115824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghhmpqltylojnmamnjbwzcwzmczmrmld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089417.7513394-498-94923360586094/AnsiballZ_stat.py'
Jan 22 13:43:38 compute-1 sudo[115824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:38 compute-1 python3.9[115826]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:38 compute-1 sudo[115824]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:38 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 404 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:38 compute-1 sudo[115947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzyvtqensjvfqncvyboosdemndcnhbhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089417.7513394-498-94923360586094/AnsiballZ_copy.py'
Jan 22 13:43:38 compute-1 sudo[115947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:38 compute-1 python3.9[115949]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089417.7513394-498-94923360586094/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=4258078fcdb3d37440c80fd4a45a43efed1545fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:38 compute-1 sudo[115947]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:38.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:39.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:39 compute-1 ceph-mon[81715]: pgmap v457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:39 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:39 compute-1 sudo[116099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duwxpgeldkfdwumupenfoodcmmfkdcgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089419.5605257-660-155586531213540/AnsiballZ_file.py'
Jan 22 13:43:39 compute-1 sudo[116099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:40 compute-1 python3.9[116101]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:40 compute-1 sudo[116099]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:40 compute-1 sudo[116251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcqjfitvkmryvgfspeggugvfydvvdrpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089420.252835-688-128828306301086/AnsiballZ_stat.py'
Jan 22 13:43:40 compute-1 sudo[116251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:40 compute-1 python3.9[116253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:40 compute-1 sudo[116251]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:40.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:41 compute-1 sudo[116374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddxcblysijsgxmftgmjwcquxtpmwfwbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089420.252835-688-128828306301086/AnsiballZ_copy.py'
Jan 22 13:43:41 compute-1 sudo[116374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:41 compute-1 python3.9[116376]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089420.252835-688-128828306301086/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:41.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:41 compute-1 sudo[116374]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:41 compute-1 ceph-mon[81715]: pgmap v458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:41 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:41 compute-1 sudo[116526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmocbbmgowuafhfyghhvjfhmmdrvlpwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089421.496998-744-230793948815255/AnsiballZ_file.py'
Jan 22 13:43:41 compute-1 sudo[116526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:41 compute-1 python3.9[116528]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:42 compute-1 sudo[116526]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:42 compute-1 sudo[116678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqjtgifjullsmwcmulionoftafravltv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089422.1715617-768-185721899192685/AnsiballZ_stat.py'
Jan 22 13:43:42 compute-1 sudo[116678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:42 compute-1 python3.9[116680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:42 compute-1 sudo[116678]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:42 compute-1 ceph-mon[81715]: pgmap v459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:42.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:43 compute-1 sudo[116801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iizjuemxedapkktruelmnktbxzajjllg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089422.1715617-768-185721899192685/AnsiballZ_copy.py'
Jan 22 13:43:43 compute-1 sudo[116801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:43 compute-1 python3.9[116803]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089422.1715617-768-185721899192685/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:43.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:43 compute-1 sudo[116801]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:43 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:43 compute-1 sudo[116953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lifstkwcgcqvkfnvkljzopluclpsqykq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089423.5285463-817-122326576189867/AnsiballZ_file.py'
Jan 22 13:43:43 compute-1 sudo[116953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:44 compute-1 python3.9[116955]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:44 compute-1 sudo[116953]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:44 compute-1 sudo[117105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhbriptdhphbjozdbczkqdzutgkidzrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089424.235006-845-115837118164524/AnsiballZ_stat.py'
Jan 22 13:43:44 compute-1 sudo[117105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:44 compute-1 python3.9[117107]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:44 compute-1 sudo[117105]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:44.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:44 compute-1 ceph-mon[81715]: pgmap v460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:44 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:45 compute-1 sudo[117228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjzygihkopxsdbmjmnxtbltkvricuard ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089424.235006-845-115837118164524/AnsiballZ_copy.py'
Jan 22 13:43:45 compute-1 sudo[117228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:45.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:45 compute-1 python3.9[117230]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089424.235006-845-115837118164524/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:45 compute-1 sudo[117228]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:45 compute-1 sudo[117380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzdrsvvhtdsapyfluhcdoeklahyqwetq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089425.5918567-894-199691640018259/AnsiballZ_file.py'
Jan 22 13:43:45 compute-1 sudo[117380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:46 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:46 compute-1 python3.9[117382]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:46 compute-1 sudo[117380]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:46 compute-1 sudo[117532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtieapbfhapwcnunphsnavfkgwkvyzlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089426.27831-916-210134406126293/AnsiballZ_stat.py'
Jan 22 13:43:46 compute-1 sudo[117532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:46.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:46 compute-1 python3.9[117534]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:46 compute-1 sudo[117532]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:46 compute-1 ceph-mon[81715]: pgmap v461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:46 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:47 compute-1 sudo[117655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwlmtveunyrfdlaokkfbjjvreffbumfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089426.27831-916-210134406126293/AnsiballZ_copy.py'
Jan 22 13:43:47 compute-1 sudo[117655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:47.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:47 compute-1 python3.9[117657]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089426.27831-916-210134406126293/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:47 compute-1 sudo[117655]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:47 compute-1 sudo[117807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjzxaxhkkqlattbipfzcbddgmcopovpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089427.6459181-963-164043903680837/AnsiballZ_file.py'
Jan 22 13:43:47 compute-1 sudo[117807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:48 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:48 compute-1 python3.9[117809]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:48 compute-1 sudo[117807]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:48 compute-1 sudo[117959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kddyktrbftnnxeepqzswghgdmxadakon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089428.3206208-987-131121250486384/AnsiballZ_stat.py'
Jan 22 13:43:48 compute-1 sudo[117959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:48 compute-1 python3.9[117961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:48 compute-1 sudo[117959]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:48.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:49 compute-1 ceph-mon[81715]: pgmap v462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:49 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:49 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 419 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:49 compute-1 sudo[118082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcmgncctoeznsyrvcrtkhkzofvtxurgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089428.3206208-987-131121250486384/AnsiballZ_copy.py'
Jan 22 13:43:49 compute-1 sudo[118082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:49.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:49 compute-1 python3.9[118084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089428.3206208-987-131121250486384/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:49 compute-1 sudo[118082]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:49 compute-1 sudo[118234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djydbqufdjwkbkfpebjvtyafdkmajukv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089429.6210916-1031-128037174947747/AnsiballZ_file.py'
Jan 22 13:43:49 compute-1 sudo[118234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:50 compute-1 python3.9[118236]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:50 compute-1 sudo[118234]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:50 compute-1 sudo[118386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixtrgoisvzfrntuutakfeaciezklapsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089430.2663488-1054-179678444151943/AnsiballZ_stat.py'
Jan 22 13:43:50 compute-1 sudo[118386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:50 compute-1 python3.9[118388]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:50 compute-1 sudo[118386]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:50.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:51 compute-1 ceph-mon[81715]: pgmap v463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:51 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:51 compute-1 sudo[118509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqxscpulnffofnxyfgeyhwlmzxgliuqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089430.2663488-1054-179678444151943/AnsiballZ_copy.py'
Jan 22 13:43:51 compute-1 sudo[118509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:51.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:51 compute-1 python3.9[118511]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089430.2663488-1054-179678444151943/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:51 compute-1 sudo[118509]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:52.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:53 compute-1 ceph-mon[81715]: pgmap v464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:53.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:54 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:54 compute-1 sshd-session[112264]: Connection closed by 192.168.122.30 port 51998
Jan 22 13:43:54 compute-1 sshd-session[112223]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:43:54 compute-1 systemd[1]: session-43.scope: Deactivated successfully.
Jan 22 13:43:54 compute-1 systemd[1]: session-43.scope: Consumed 22.612s CPU time.
Jan 22 13:43:54 compute-1 systemd-logind[787]: Session 43 logged out. Waiting for processes to exit.
Jan 22 13:43:54 compute-1 systemd-logind[787]: Removed session 43.
Jan 22 13:43:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:54.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:55 compute-1 ceph-mon[81715]: pgmap v465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:55 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:55.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:56 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:56.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:57.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:57 compute-1 ceph-mon[81715]: pgmap v466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:57 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:58 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 424 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:58.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:43:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:59.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:59 compute-1 ceph-mon[81715]: pgmap v467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:00 compute-1 sshd-session[118536]: Accepted publickey for zuul from 192.168.122.30 port 58204 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:44:00 compute-1 systemd-logind[787]: New session 44 of user zuul.
Jan 22 13:44:00 compute-1 systemd[1]: Started Session 44 of User zuul.
Jan 22 13:44:00 compute-1 sshd-session[118536]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:44:00 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:00 compute-1 ceph-mon[81715]: pgmap v468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:00 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:00 compute-1 sudo[118689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drrrplhbksoojhdohbukkiqlvqqebvqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089440.3448067-27-234027171929194/AnsiballZ_file.py'
Jan 22 13:44:00 compute-1 sudo[118689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:00.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:01 compute-1 python3.9[118691]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:01 compute-1 sudo[118689]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:01.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:01 compute-1 sudo[118841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zczvdeglqfqhyzfgrexyuubexlzpekbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089441.301294-63-89906202216053/AnsiballZ_stat.py'
Jan 22 13:44:01 compute-1 sudo[118841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:01 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:01 compute-1 python3.9[118843]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:01 compute-1 sudo[118841]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:02 compute-1 sudo[118964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysnfxeqffbewgskorpkkvtjepmvlsnnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089441.301294-63-89906202216053/AnsiballZ_copy.py'
Jan 22 13:44:02 compute-1 sudo[118964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:02 compute-1 python3.9[118966]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089441.301294-63-89906202216053/.source.conf _original_basename=ceph.conf follow=False checksum=c3a8ec6ec08fd3904e44a403280c0742b2934d96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:02 compute-1 sudo[118964]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:02 compute-1 ceph-mon[81715]: pgmap v469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:02 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:02 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 434 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:44:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:02.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:44:03 compute-1 sudo[119116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrrvazpaikkhzqkysedusilikaxsintd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089442.8524778-63-252387533520543/AnsiballZ_stat.py'
Jan 22 13:44:03 compute-1 sudo[119116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:03.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:03 compute-1 python3.9[119118]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:03 compute-1 sudo[119116]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:03 compute-1 sudo[119239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxwdwlejvsnevnaaavhwveruxsdojyef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089442.8524778-63-252387533520543/AnsiballZ_copy.py'
Jan 22 13:44:03 compute-1 sudo[119239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:03 compute-1 python3.9[119241]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089442.8524778-63-252387533520543/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=8d4a0ad3eb7bcba9ed45036c12ef9de6a4ee9832 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:03 compute-1 sudo[119239]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:04 compute-1 sshd-session[118539]: Connection closed by 192.168.122.30 port 58204
Jan 22 13:44:04 compute-1 sshd-session[118536]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:44:04 compute-1 systemd[1]: session-44.scope: Deactivated successfully.
Jan 22 13:44:04 compute-1 systemd[1]: session-44.scope: Consumed 2.666s CPU time.
Jan 22 13:44:04 compute-1 systemd-logind[787]: Session 44 logged out. Waiting for processes to exit.
Jan 22 13:44:04 compute-1 systemd-logind[787]: Removed session 44.
Jan 22 13:44:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:04.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:05 compute-1 ceph-mon[81715]: pgmap v470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:05.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:06.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:07 compute-1 ceph-mon[81715]: pgmap v471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:07 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:44:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:07.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:44:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:08.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:09 compute-1 ceph-mon[81715]: pgmap v472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:09 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 439 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:09.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:09 compute-1 sshd-session[119266]: Accepted publickey for zuul from 192.168.122.30 port 54118 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:44:09 compute-1 systemd-logind[787]: New session 45 of user zuul.
Jan 22 13:44:09 compute-1 systemd[1]: Started Session 45 of User zuul.
Jan 22 13:44:09 compute-1 sshd-session[119266]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:44:10 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:10.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:10 compute-1 python3.9[119419]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:44:11 compute-1 ceph-mon[81715]: pgmap v473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:11 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:11.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:12 compute-1 sudo[119573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhwxnanrqmlpinxogpvftltpmkhroxhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089451.5945704-63-278047276671716/AnsiballZ_file.py'
Jan 22 13:44:12 compute-1 sudo[119573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:12 compute-1 python3.9[119575]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:44:12 compute-1 sudo[119573]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:12 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:12 compute-1 sudo[119725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqqguwsgnqgwrlkumdfyvfrlkypfinhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089452.5899923-63-76479299666702/AnsiballZ_file.py'
Jan 22 13:44:12 compute-1 sudo[119725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:12.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:13 compute-1 python3.9[119727]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:44:13 compute-1 sudo[119725]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:13 compute-1 ceph-mon[81715]: pgmap v474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:13.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:13 compute-1 python3.9[119877]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:44:14 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:14 compute-1 sudo[120027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rodwxktsowovfwcfkcuvyfumdqlhjpkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089454.187155-132-207951729413091/AnsiballZ_seboolean.py'
Jan 22 13:44:14 compute-1 sudo[120027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:14 compute-1 python3.9[120029]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 22 13:44:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:14.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:15.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:15 compute-1 ceph-mon[81715]: pgmap v475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:15 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:16 compute-1 sudo[120027]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:16.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:17 compute-1 sudo[120183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zemhmajjxavegwnwzxhywuzwvpcpjmif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089456.725322-162-51321047258806/AnsiballZ_setup.py'
Jan 22 13:44:17 compute-1 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 22 13:44:17 compute-1 sudo[120183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:17.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:17 compute-1 python3.9[120185]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:44:17 compute-1 ceph-mon[81715]: pgmap v476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:17 compute-1 sudo[120183]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:17 compute-1 sudo[120267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewgincthbovzxexyzulveslhrwvylnyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089456.725322-162-51321047258806/AnsiballZ_dnf.py'
Jan 22 13:44:17 compute-1 sudo[120267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:18 compute-1 python3.9[120269]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:44:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:18.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:19 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 444 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:19.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:19 compute-1 sudo[120267]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:20 compute-1 ceph-mon[81715]: pgmap v477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:20 compute-1 sudo[120420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgfabbwbscvczfpukljrqqvahcvabomc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089460.0733924-198-217114014867723/AnsiballZ_systemd.py'
Jan 22 13:44:20 compute-1 sudo[120420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:20.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:20 compute-1 python3.9[120422]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:44:21 compute-1 sudo[120420]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:21.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:21 compute-1 ceph-mon[81715]: pgmap v478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:21 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:21 compute-1 sudo[120575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxraodqvqbubuyikguwtsvtmrrffmhvm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769089461.3488822-222-164611070710110/AnsiballZ_edpm_nftables_snippet.py'
Jan 22 13:44:21 compute-1 sudo[120575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:22 compute-1 python3[120577]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 22 13:44:22 compute-1 sudo[120575]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:22 compute-1 sudo[120727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pitalndxowixghxfmwnfrarfjnfjnnvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089462.4525058-249-170386573932120/AnsiballZ_file.py'
Jan 22 13:44:22 compute-1 sudo[120727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:22.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:22 compute-1 python3.9[120729]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:22 compute-1 sudo[120727]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:23.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:23 compute-1 sudo[120879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqzwqlsbckhrtrkuwxohxnltukefrexy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089463.2067308-273-279794925988203/AnsiballZ_stat.py'
Jan 22 13:44:23 compute-1 sudo[120879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:23 compute-1 ceph-mon[81715]: pgmap v479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:23 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 454 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:23 compute-1 python3.9[120881]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:23 compute-1 sudo[120879]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:24 compute-1 sudo[120957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsngwsxzarwwgpgncuausaxelojjzsoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089463.2067308-273-279794925988203/AnsiballZ_file.py'
Jan 22 13:44:24 compute-1 sudo[120957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:24 compute-1 python3.9[120959]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:24 compute-1 sudo[120957]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:24 compute-1 ceph-mon[81715]: pgmap v480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:24.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:24 compute-1 sudo[121109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzbtabpixgnwigyxftquxjsqivnvjsmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089464.633933-309-207813761133824/AnsiballZ_stat.py'
Jan 22 13:44:24 compute-1 sudo[121109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:25 compute-1 python3.9[121111]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:25 compute-1 sudo[121109]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:25.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:25 compute-1 sudo[121187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iytmlwvsiyqrlttmczlpzgvfwhcmozrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089464.633933-309-207813761133824/AnsiballZ_file.py'
Jan 22 13:44:25 compute-1 sudo[121187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:25 compute-1 python3.9[121189]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.f69enjq8 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:25 compute-1 sudo[121187]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:26 compute-1 sudo[121339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhtlahisozbhtuizldjkqjrasbewspkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089465.9306812-345-8514575435583/AnsiballZ_stat.py'
Jan 22 13:44:26 compute-1 sudo[121339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:26 compute-1 python3.9[121341]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:26 compute-1 sudo[121339]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:26 compute-1 ceph-mon[81715]: pgmap v481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:26 compute-1 sudo[121417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbsfxigihabxxtoeeqleyeymulkwhces ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089465.9306812-345-8514575435583/AnsiballZ_file.py'
Jan 22 13:44:26 compute-1 sudo[121417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:26.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:26 compute-1 python3.9[121419]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:26 compute-1 sudo[121417]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:27.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:27 compute-1 sudo[121569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-javwzcigfyjmkqebuoqybmjpzwaojcrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089467.2686756-384-251240450663896/AnsiballZ_command.py'
Jan 22 13:44:27 compute-1 sudo[121569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:28 compute-1 python3.9[121571]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:44:28 compute-1 sudo[121569]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:28 compute-1 sudo[121722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maxquvqioatpgrpoczarhzbrlkkntgam ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769089468.236339-408-116052019179606/AnsiballZ_edpm_nftables_from_files.py'
Jan 22 13:44:28 compute-1 sudo[121722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:28 compute-1 python3[121724]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 13:44:28 compute-1 sudo[121722]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:29.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:29 compute-1 ceph-mon[81715]: pgmap v482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:29 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:29.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:29 compute-1 sudo[121874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyqddwexepqmjhspjnrjknabhfzqolxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089469.4487805-432-279008555723522/AnsiballZ_stat.py'
Jan 22 13:44:29 compute-1 sudo[121874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:29 compute-1 python3.9[121876]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:30 compute-1 sudo[121874]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:30 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:30 compute-1 sudo[121999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdwmbzesqkunlvywxhfjurisgkjaeaub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089469.4487805-432-279008555723522/AnsiballZ_copy.py'
Jan 22 13:44:30 compute-1 sudo[121999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:30 compute-1 python3.9[122001]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089469.4487805-432-279008555723522/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:30 compute-1 sudo[121999]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:31.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:31 compute-1 sudo[122038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:31 compute-1 sudo[122038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:31 compute-1 sudo[122038]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:31 compute-1 sudo[122094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:44:31 compute-1 sudo[122094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:31 compute-1 sudo[122094]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:31 compute-1 sudo[122128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:31 compute-1 sudo[122128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:31 compute-1 sudo[122128]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:31 compute-1 ceph-mon[81715]: pgmap v483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:31 compute-1 sudo[122176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 13:44:31 compute-1 sudo[122176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:31.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:31 compute-1 sudo[122251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-almriosfavcoshvrihxeygxdqplplcai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089471.0949771-477-252415074622759/AnsiballZ_stat.py'
Jan 22 13:44:31 compute-1 sudo[122251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:31 compute-1 sudo[122176]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:31 compute-1 python3.9[122253]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:31 compute-1 sudo[122273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:31 compute-1 sudo[122251]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:31 compute-1 sudo[122273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:31 compute-1 sudo[122273]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:31 compute-1 sudo[122300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:44:31 compute-1 sudo[122300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:31 compute-1 sudo[122300]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:31 compute-1 sudo[122348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:31 compute-1 sudo[122348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:31 compute-1 sudo[122348]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:31 compute-1 sudo[122387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:44:31 compute-1 sudo[122387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:32 compute-1 sudo[122509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvvwhrpplpznedssmhurfmogzlpukrja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089471.0949771-477-252415074622759/AnsiballZ_copy.py'
Jan 22 13:44:32 compute-1 sudo[122509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:32 compute-1 python3.9[122511]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089471.0949771-477-252415074622759/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:32 compute-1 sudo[122387]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:32 compute-1 sudo[122509]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:32 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:44:32 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:44:32 compute-1 sudo[122678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrzwqmsjhjjsiwsolzfnjmpxthwpjjsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089472.5086403-522-111495984059400/AnsiballZ_stat.py'
Jan 22 13:44:32 compute-1 sudo[122678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:33 compute-1 python3.9[122680]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:44:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:33.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:44:33 compute-1 sudo[122678]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:33 compute-1 ceph-mon[81715]: pgmap v484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:33 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 464 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:44:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:44:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:33.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:33 compute-1 sudo[122803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouetzxgawakladbslveoqlbihslotmle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089472.5086403-522-111495984059400/AnsiballZ_copy.py'
Jan 22 13:44:33 compute-1 sudo[122803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:33 compute-1 python3.9[122805]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089472.5086403-522-111495984059400/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:33 compute-1 sudo[122803]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:34 compute-1 sudo[122955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxjadkuqvtfawpajshysynzgxjnifzdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089473.9480953-567-130309993562111/AnsiballZ_stat.py'
Jan 22 13:44:34 compute-1 sudo[122955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:34 compute-1 python3.9[122957]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:34 compute-1 sudo[122955]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:34 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:44:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:44:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:44:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:44:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:44:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:44:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 13:44:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Cumulative writes: 6027 writes, 25K keys, 6027 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6027 writes, 961 syncs, 6.27 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6027 writes, 25K keys, 6027 commit groups, 1.0 writes per commit group, ingest: 19.25 MB, 0.03 MB/s
                                           Interval WAL: 6027 writes, 961 syncs, 6.27 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3770#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3770#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3770#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 22 13:44:34 compute-1 sudo[123080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehodbhavwemsuokzbjtftccmtcfmmhnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089473.9480953-567-130309993562111/AnsiballZ_copy.py'
Jan 22 13:44:34 compute-1 sudo[123080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:34 compute-1 python3.9[123082]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089473.9480953-567-130309993562111/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:35 compute-1 sudo[123080]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:35.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:35.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:35 compute-1 ceph-mon[81715]: pgmap v485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:35 compute-1 sudo[123232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wngcztibibmuvkfmtfwvrtaflwyvokny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089475.3187177-612-238158840577734/AnsiballZ_stat.py'
Jan 22 13:44:35 compute-1 sudo[123232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:35 compute-1 python3.9[123234]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:35 compute-1 sudo[123232]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:36 compute-1 sudo[123357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcqevppvickzkqpjybxahsoasejfdbtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089475.3187177-612-238158840577734/AnsiballZ_copy.py'
Jan 22 13:44:36 compute-1 sudo[123357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:36 compute-1 python3.9[123359]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089475.3187177-612-238158840577734/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:36 compute-1 sudo[123357]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:37 compute-1 sudo[123509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhbbwbnaqktmcfwdhgoadvevbbjotodo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089476.70282-657-187542686699011/AnsiballZ_file.py'
Jan 22 13:44:37 compute-1 sudo[123509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:37.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:37 compute-1 python3.9[123511]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:37 compute-1 sudo[123509]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:37.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:37 compute-1 ceph-mon[81715]: pgmap v486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:37 compute-1 sudo[123661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qewjqxbkfnruwugulvspvphbdemkpeaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089477.4629443-681-257108899002649/AnsiballZ_command.py'
Jan 22 13:44:37 compute-1 sudo[123661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:38 compute-1 python3.9[123663]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:44:38 compute-1 sudo[123661]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:38 compute-1 sudo[123816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohzvdbznojyoxcadiorlweggrfcfrafa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089478.2862492-705-208865211963617/AnsiballZ_blockinfile.py'
Jan 22 13:44:38 compute-1 sudo[123816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:38 compute-1 python3.9[123818]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:38 compute-1 sudo[123816]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:39.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:39.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:39 compute-1 ceph-mon[81715]: pgmap v487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:39 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 469 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:39 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:39 compute-1 sudo[123968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckjurfuamwkqfhmchtgienyhxdmempix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089479.3516138-732-15416884536400/AnsiballZ_command.py'
Jan 22 13:44:39 compute-1 sudo[123968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:39 compute-1 python3.9[123970]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:44:39 compute-1 sudo[123968]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:40 compute-1 sudo[124013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:40 compute-1 sudo[124013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:40 compute-1 sudo[124013]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:40 compute-1 sudo[124051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:44:40 compute-1 sudo[124051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:40 compute-1 sudo[124051]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:40 compute-1 sudo[124171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlwuuizusdjunccrmxcbwwfegipthjxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089480.2271757-756-1258543444240/AnsiballZ_stat.py'
Jan 22 13:44:40 compute-1 sudo[124171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:40 compute-1 python3.9[124173]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:44:40 compute-1 sudo[124171]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:41.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:41 compute-1 ceph-mon[81715]: pgmap v488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:41 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:44:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:44:41 compute-1 sudo[124325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaqpvelxrekujepxbttthnbilzhcovvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089480.9491308-780-184641122602511/AnsiballZ_command.py'
Jan 22 13:44:41 compute-1 sudo[124325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:41.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:41 compute-1 python3.9[124327]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:44:41 compute-1 sudo[124325]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:42 compute-1 sudo[124480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txldwqleukjjztvyetjgcqctvqjgqozy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089481.7199316-804-103621809716745/AnsiballZ_file.py'
Jan 22 13:44:42 compute-1 sudo[124480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:42 compute-1 python3.9[124482]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:42 compute-1 sudo[124480]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:43.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:43.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:43 compute-1 ceph-mon[81715]: pgmap v489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:43 compute-1 python3.9[124632]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:44:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:44 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:44 compute-1 sudo[124783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrwgvwjzvutktitubopuhgbbfcmyacjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089484.6167834-924-174535102636217/AnsiballZ_command.py'
Jan 22 13:44:44 compute-1 sudo[124783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:45.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:45 compute-1 python3.9[124785]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:44:45 compute-1 ovs-vsctl[124786]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 22 13:44:45 compute-1 sudo[124783]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:45.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:45 compute-1 ceph-mon[81715]: pgmap v490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:45 compute-1 sudo[124936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-titognfmvoiewpgvzwfjazdtqkpkfjrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089485.5424345-951-262455449322251/AnsiballZ_command.py'
Jan 22 13:44:45 compute-1 sudo[124936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:46 compute-1 python3.9[124938]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:44:46 compute-1 sudo[124936]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:46 compute-1 sudo[125091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejttiozvwlatojkielsllvvlwubvbpmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089486.4094272-975-9858337225566/AnsiballZ_command.py'
Jan 22 13:44:46 compute-1 sudo[125091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:46 compute-1 python3.9[125093]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:44:46 compute-1 ovs-vsctl[125094]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 22 13:44:46 compute-1 sudo[125091]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:46 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:46 compute-1 ceph-mon[81715]: pgmap v491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:46 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:47.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:47.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:47 compute-1 python3.9[125244]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:44:48 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:48 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 474 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:48 compute-1 sudo[125397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iogkhbwooxchguviqacaekakkuhfqwba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089488.2282581-1026-232245570548049/AnsiballZ_file.py'
Jan 22 13:44:48 compute-1 sudo[125397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:48 compute-1 python3.9[125399]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:44:48 compute-1 sudo[125397]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:49.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:49 compute-1 ceph-mon[81715]: pgmap v492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:49 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:49 compute-1 sudo[125549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kghidhoyccdszqfomcxokvmtgselyopg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089489.0068083-1050-68672555453897/AnsiballZ_stat.py'
Jan 22 13:44:49 compute-1 sudo[125549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:49.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:49 compute-1 python3.9[125551]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:49 compute-1 sudo[125549]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:49 compute-1 sudo[125627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsaqwjzzfnlnefgbqytzaaxpbfvekgsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089489.0068083-1050-68672555453897/AnsiballZ_file.py'
Jan 22 13:44:49 compute-1 sudo[125627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:50 compute-1 python3.9[125629]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:44:50 compute-1 sudo[125627]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:50 compute-1 sudo[125779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxjfylhualqzodgtnwxlpoqxhktsqind ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089490.1991248-1050-158075091990891/AnsiballZ_stat.py'
Jan 22 13:44:50 compute-1 sudo[125779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:50 compute-1 python3.9[125781]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:50 compute-1 sudo[125779]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:51 compute-1 sudo[125857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pacltmxdxofefyilfavrbmqfnwvlfqma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089490.1991248-1050-158075091990891/AnsiballZ_file.py'
Jan 22 13:44:51 compute-1 sudo[125857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:51.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:51 compute-1 python3.9[125859]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:44:51 compute-1 sudo[125857]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:44:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:51.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:44:51 compute-1 ceph-mon[81715]: pgmap v493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:51 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:51 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:51 compute-1 sudo[126009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulwtotcroygjpvpkowlsijqhuviyikza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089491.4941242-1119-255141068187610/AnsiballZ_file.py'
Jan 22 13:44:51 compute-1 sudo[126009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:51 compute-1 python3.9[126011]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:51 compute-1 sudo[126009]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:52 compute-1 sudo[126161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyysafzcuvfezvwaskvdqcbwjbtkxpqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089492.3111677-1143-69644462576999/AnsiballZ_stat.py'
Jan 22 13:44:52 compute-1 sudo[126161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:52 compute-1 ceph-mon[81715]: pgmap v494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:52 compute-1 python3.9[126163]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:52 compute-1 sudo[126161]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:53.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:53 compute-1 sudo[126239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrjpsbgafrgzqtqvqarycbodaetiplkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089492.3111677-1143-69644462576999/AnsiballZ_file.py'
Jan 22 13:44:53 compute-1 sudo[126239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:53 compute-1 python3.9[126241]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:53 compute-1 sudo[126239]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:53.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:53 compute-1 sudo[126391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxcrrluexdpqxkerahmvalkuexpmawko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089493.6700928-1179-76328155709438/AnsiballZ_stat.py'
Jan 22 13:44:53 compute-1 sudo[126391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:54 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:54 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 484 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:54 compute-1 python3.9[126393]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:54 compute-1 sudo[126391]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:54 compute-1 sudo[126469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbfcbvbyuituvizqivfvtaovpclfklcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089493.6700928-1179-76328155709438/AnsiballZ_file.py'
Jan 22 13:44:54 compute-1 sudo[126469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:54 compute-1 python3.9[126471]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:54 compute-1 sudo[126469]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:55.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:55 compute-1 ceph-mon[81715]: pgmap v495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:55 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:55 compute-1 sudo[126621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hauwtojkocnsinvbpxcvjlbrbqmkqfwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089495.018304-1215-154935447482141/AnsiballZ_systemd.py'
Jan 22 13:44:55 compute-1 sudo[126621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:55.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:55 compute-1 python3.9[126623]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:44:55 compute-1 systemd[1]: Reloading.
Jan 22 13:44:55 compute-1 systemd-rc-local-generator[126653]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:44:55 compute-1 systemd-sysv-generator[126656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:44:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:57.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:57 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:57 compute-1 sudo[126621]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:57.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:57 compute-1 sudo[126812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gutklenbspnutmfiqmhyjakvehifiyvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089497.456464-1239-52748478152607/AnsiballZ_stat.py'
Jan 22 13:44:57 compute-1 sudo[126812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:57 compute-1 python3.9[126814]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:57 compute-1 sudo[126812]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:58 compute-1 sudo[126890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oohcmrjqryzzotxtutnaljebvlsbmhwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089497.456464-1239-52748478152607/AnsiballZ_file.py'
Jan 22 13:44:58 compute-1 sudo[126890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:58 compute-1 python3.9[126892]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:58 compute-1 sudo[126890]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:58 compute-1 ceph-mon[81715]: pgmap v496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:58 compute-1 sudo[127042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zusstjqqsetgxgyfdzmgquzjnuqbssyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089498.6734989-1275-38488000033128/AnsiballZ_stat.py'
Jan 22 13:44:58 compute-1 sudo[127042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:59.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:59 compute-1 python3.9[127044]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:59 compute-1 sudo[127042]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:44:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:59.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:59 compute-1 sudo[127120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uceumuhjtgxazlodugildzfrbutcecmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089498.6734989-1275-38488000033128/AnsiballZ_file.py'
Jan 22 13:44:59 compute-1 sudo[127120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:59 compute-1 python3.9[127122]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:59 compute-1 sudo[127120]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:59 compute-1 ceph-mon[81715]: pgmap v497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:59 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 489 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:00 compute-1 sudo[127272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esqghgikrzttcybediqjsfaampeylsgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089499.9618912-1311-48903145750513/AnsiballZ_systemd.py'
Jan 22 13:45:00 compute-1 sudo[127272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:00 compute-1 python3.9[127274]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:45:00 compute-1 systemd[1]: Reloading.
Jan 22 13:45:00 compute-1 systemd-rc-local-generator[127305]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:45:00 compute-1 systemd-sysv-generator[127308]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:45:00 compute-1 systemd[1]: Starting Create netns directory...
Jan 22 13:45:01 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 13:45:01 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 13:45:01 compute-1 systemd[1]: Finished Create netns directory.
Jan 22 13:45:01 compute-1 sudo[127272]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:01.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:01.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:01 compute-1 ceph-mon[81715]: pgmap v498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:01 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:01 compute-1 sudo[127466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnxhlnppfmrxpqyqlqydahbxucglrytj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089501.4743588-1341-157278661719227/AnsiballZ_file.py'
Jan 22 13:45:01 compute-1 sudo[127466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:02 compute-1 python3.9[127468]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:02 compute-1 sudo[127466]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:02 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:02 compute-1 sudo[127618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbfzgkixboikbbygmfnkvdzstvetquqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089502.4659057-1365-191987629783071/AnsiballZ_stat.py'
Jan 22 13:45:02 compute-1 sudo[127618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:03 compute-1 python3.9[127620]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:45:03 compute-1 sudo[127618]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:03.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:03.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:03 compute-1 sudo[127741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drefevkkunqllmcummyomigkirzasvnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089502.4659057-1365-191987629783071/AnsiballZ_copy.py'
Jan 22 13:45:03 compute-1 sudo[127741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:03 compute-1 python3.9[127743]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089502.4659057-1365-191987629783071/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:03 compute-1 ceph-mon[81715]: pgmap v499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:03 compute-1 sudo[127741]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:04 compute-1 sudo[127893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhyiosounshfypaocbchmydxvviwncya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089504.2486951-1416-207796524677099/AnsiballZ_file.py'
Jan 22 13:45:04 compute-1 sudo[127893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:04 compute-1 python3.9[127895]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:45:04 compute-1 sudo[127893]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:05.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:05 compute-1 sudo[128045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcydeiwtpqwenehgkvodxwhtuwbzriyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089505.0610185-1440-211777154756699/AnsiballZ_file.py'
Jan 22 13:45:05 compute-1 sudo[128045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:05.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:05 compute-1 python3.9[128047]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:05 compute-1 sudo[128045]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:05 compute-1 ceph-mon[81715]: pgmap v500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:06 compute-1 sudo[128197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crdmaozssuwdtkgqhutngxdqbcfqwpae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089505.934543-1464-156752273844533/AnsiballZ_stat.py'
Jan 22 13:45:06 compute-1 sudo[128197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:06 compute-1 python3.9[128199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:45:06 compute-1 sudo[128197]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:06 compute-1 sudo[128320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wobtbjpdrgvgwcuisqohwunrgmjcqqvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089505.934543-1464-156752273844533/AnsiballZ_copy.py'
Jan 22 13:45:06 compute-1 sudo[128320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:07 compute-1 python3.9[128322]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089505.934543-1464-156752273844533/.source.json _original_basename=.sa1r0ghs follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:45:07 compute-1 sudo[128320]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:07.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:07 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:07 compute-1 ceph-mon[81715]: pgmap v501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:07 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.627597) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507627697, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2355, "num_deletes": 251, "total_data_size": 4759955, "memory_usage": 4808176, "flush_reason": "Manual Compaction"}
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507647596, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 3097044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10264, "largest_seqno": 12614, "table_properties": {"data_size": 3088227, "index_size": 5055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 21978, "raw_average_key_size": 20, "raw_value_size": 3068799, "raw_average_value_size": 2919, "num_data_blocks": 220, "num_entries": 1051, "num_filter_entries": 1051, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089343, "oldest_key_time": 1769089343, "file_creation_time": 1769089507, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 20049 microseconds, and 9012 cpu microseconds.
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.647645) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 3097044 bytes OK
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.647686) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.649370) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.649382) EVENT_LOG_v1 {"time_micros": 1769089507649378, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.649402) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4749195, prev total WAL file size 4749195, number of live WAL files 2.
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.650598) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(3024KB)], [21(7707KB)]
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507650699, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 10989184, "oldest_snapshot_seqno": -1}
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4558 keys, 8311847 bytes, temperature: kUnknown
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507704752, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 8311847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8280006, "index_size": 19315, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11461, "raw_key_size": 112589, "raw_average_key_size": 24, "raw_value_size": 8195986, "raw_average_value_size": 1798, "num_data_blocks": 819, "num_entries": 4558, "num_filter_entries": 4558, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089507, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.705083) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8311847 bytes
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.706817) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.9 rd, 153.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.5 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(6.2) write-amplify(2.7) OK, records in: 5077, records dropped: 519 output_compression: NoCompression
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.706839) EVENT_LOG_v1 {"time_micros": 1769089507706827, "job": 10, "event": "compaction_finished", "compaction_time_micros": 54160, "compaction_time_cpu_micros": 21605, "output_level": 6, "num_output_files": 1, "total_output_size": 8311847, "num_input_records": 5077, "num_output_records": 4558, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507707570, "job": 10, "event": "table_file_deletion", "file_number": 23}
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507709533, "job": 10, "event": "table_file_deletion", "file_number": 21}
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.650508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.709653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.709677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.709679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.709681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.709683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:45:07 compute-1 python3.9[128472]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:45:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:08 compute-1 ceph-mon[81715]: pgmap v502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:08 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 494 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:09.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:09.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:10 compute-1 sudo[128893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxxogzuaorwfihzeqqairntpajlcdnhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089509.8309796-1584-14881578652488/AnsiballZ_container_config_data.py'
Jan 22 13:45:10 compute-1 sudo[128893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:10 compute-1 python3.9[128895]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 22 13:45:10 compute-1 sudo[128893]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:10 compute-1 ceph-mon[81715]: pgmap v503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:10 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:11.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:11.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:11 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:12 compute-1 sudo[129045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvfescbldbeycgpbixkgatyluprhbgfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089511.4696996-1617-46956617785491/AnsiballZ_container_config_hash.py'
Jan 22 13:45:12 compute-1 sudo[129045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:12 compute-1 python3.9[129047]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 13:45:12 compute-1 sudo[129045]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:12 compute-1 ceph-mon[81715]: pgmap v504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:12 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:12 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 504 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:13.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:13.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:13 compute-1 sudo[129197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deblhfkyymrnsrwtroufebtxifhvvvbp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769089512.9730403-1647-39953024319496/AnsiballZ_edpm_container_manage.py'
Jan 22 13:45:13 compute-1 sudo[129197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:14 compute-1 python3[129199]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 13:45:14 compute-1 ceph-mon[81715]: pgmap v505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:14 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:15.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:15.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:15 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:16 compute-1 ceph-mon[81715]: pgmap v506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:17.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:17.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:18 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:19 compute-1 podman[129212]: 2026-01-22 13:45:19.116774607 +0000 UTC m=+4.899361579 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 22 13:45:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:19.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:19 compute-1 podman[129331]: 2026-01-22 13:45:19.26073347 +0000 UTC m=+0.055324800 container create 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Jan 22 13:45:19 compute-1 podman[129331]: 2026-01-22 13:45:19.227831767 +0000 UTC m=+0.022423117 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 22 13:45:19 compute-1 python3[129199]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 22 13:45:19 compute-1 sudo[129197]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:19.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:19 compute-1 ceph-mon[81715]: pgmap v507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:19 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 509 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:20 compute-1 ceph-mon[81715]: pgmap v508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:20 compute-1 sudo[129519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjbvdpvypahiwwunstnfgccrsyqsylta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089520.5659912-1671-180610347472741/AnsiballZ_stat.py'
Jan 22 13:45:20 compute-1 sudo[129519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:21.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:21 compute-1 python3.9[129521]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:45:21 compute-1 sudo[129519]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:21.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:21 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:22 compute-1 sudo[129673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqqbjmtvvncpyjhzdspuncpzjaalkhev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089522.518181-1698-113857842744491/AnsiballZ_file.py'
Jan 22 13:45:22 compute-1 sudo[129673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:22 compute-1 ceph-mon[81715]: pgmap v509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:22 compute-1 python3.9[129675]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:45:22 compute-1 sudo[129673]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:23.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:23 compute-1 sudo[129749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkwqipiifsaaatysfwferstbtffoipsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089522.518181-1698-113857842744491/AnsiballZ_stat.py'
Jan 22 13:45:23 compute-1 sudo[129749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:23.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:23 compute-1 python3.9[129751]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:45:23 compute-1 sudo[129749]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:24 compute-1 sudo[129900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbfowmhutcqwzxbecopywcgtoevsnnjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089523.5244226-1698-85458426902583/AnsiballZ_copy.py'
Jan 22 13:45:24 compute-1 sudo[129900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:24 compute-1 python3.9[129902]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769089523.5244226-1698-85458426902583/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:45:24 compute-1 sudo[129900]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:25.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:25 compute-1 sudo[129976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skxlmdlzvxatfjzzuzbbsqvwmxjljyxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089523.5244226-1698-85458426902583/AnsiballZ_systemd.py'
Jan 22 13:45:25 compute-1 sudo[129976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:25.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:25 compute-1 python3.9[129978]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:45:25 compute-1 systemd[1]: Reloading.
Jan 22 13:45:25 compute-1 ceph-mon[81715]: pgmap v510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:25 compute-1 systemd-rc-local-generator[130006]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:45:25 compute-1 systemd-sysv-generator[130009]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:45:25 compute-1 sudo[129976]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:26 compute-1 sudo[130087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzlupiaueuajglaiblkahasyeggywrqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089523.5244226-1698-85458426902583/AnsiballZ_systemd.py'
Jan 22 13:45:26 compute-1 sudo[130087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:26 compute-1 python3.9[130089]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:45:26 compute-1 systemd[1]: Reloading.
Jan 22 13:45:26 compute-1 systemd-sysv-generator[130120]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:45:26 compute-1 systemd-rc-local-generator[130116]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:45:26 compute-1 systemd[1]: Starting ovn_controller container...
Jan 22 13:45:26 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:45:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12dd6c5e4c9b17a9594d6d4a4b5c6490265d8b0ad3b98c5fc37508ca98ce00b3/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 22 13:45:26 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536.
Jan 22 13:45:26 compute-1 podman[130129]: 2026-01-22 13:45:26.850215344 +0000 UTC m=+0.134718391 container init 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 13:45:26 compute-1 ovn_controller[130144]: + sudo -E kolla_set_configs
Jan 22 13:45:26 compute-1 podman[130129]: 2026-01-22 13:45:26.871332874 +0000 UTC m=+0.155835901 container start 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 13:45:26 compute-1 edpm-start-podman-container[130129]: ovn_controller
Jan 22 13:45:26 compute-1 systemd[1]: Created slice User Slice of UID 0.
Jan 22 13:45:26 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 22 13:45:26 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 22 13:45:26 compute-1 edpm-start-podman-container[130128]: Creating additional drop-in dependency for "ovn_controller" (89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536)
Jan 22 13:45:26 compute-1 systemd[1]: Starting User Manager for UID 0...
Jan 22 13:45:26 compute-1 podman[130150]: 2026-01-22 13:45:26.944224316 +0000 UTC m=+0.060733529 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 13:45:26 compute-1 systemd[1]: 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536-1384928aa75b3952.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 13:45:26 compute-1 systemd[1]: 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536-1384928aa75b3952.service: Failed with result 'exit-code'.
Jan 22 13:45:26 compute-1 systemd[130185]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 22 13:45:26 compute-1 systemd[1]: Reloading.
Jan 22 13:45:27 compute-1 systemd-rc-local-generator[130230]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:45:27 compute-1 systemd-sysv-generator[130234]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:45:27 compute-1 systemd[130185]: Queued start job for default target Main User Target.
Jan 22 13:45:27 compute-1 systemd[130185]: Created slice User Application Slice.
Jan 22 13:45:27 compute-1 systemd[130185]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 22 13:45:27 compute-1 systemd[130185]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 13:45:27 compute-1 systemd[130185]: Reached target Paths.
Jan 22 13:45:27 compute-1 systemd[130185]: Reached target Timers.
Jan 22 13:45:27 compute-1 systemd[130185]: Starting D-Bus User Message Bus Socket...
Jan 22 13:45:27 compute-1 systemd[130185]: Starting Create User's Volatile Files and Directories...
Jan 22 13:45:27 compute-1 systemd[130185]: Listening on D-Bus User Message Bus Socket.
Jan 22 13:45:27 compute-1 systemd[130185]: Finished Create User's Volatile Files and Directories.
Jan 22 13:45:27 compute-1 systemd[130185]: Reached target Sockets.
Jan 22 13:45:27 compute-1 systemd[130185]: Reached target Basic System.
Jan 22 13:45:27 compute-1 systemd[130185]: Reached target Main User Target.
Jan 22 13:45:27 compute-1 systemd[130185]: Startup finished in 124ms.
Jan 22 13:45:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:27.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:27 compute-1 systemd[1]: Started User Manager for UID 0.
Jan 22 13:45:27 compute-1 systemd[1]: Started ovn_controller container.
Jan 22 13:45:27 compute-1 systemd[1]: Started Session c1 of User root.
Jan 22 13:45:27 compute-1 sudo[130087]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:27 compute-1 ovn_controller[130144]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 13:45:27 compute-1 ovn_controller[130144]: INFO:__main__:Validating config file
Jan 22 13:45:27 compute-1 ovn_controller[130144]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 13:45:27 compute-1 ovn_controller[130144]: INFO:__main__:Writing out command to execute
Jan 22 13:45:27 compute-1 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 22 13:45:27 compute-1 ovn_controller[130144]: ++ cat /run_command
Jan 22 13:45:27 compute-1 ovn_controller[130144]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 22 13:45:27 compute-1 ovn_controller[130144]: + ARGS=
Jan 22 13:45:27 compute-1 ovn_controller[130144]: + sudo kolla_copy_cacerts
Jan 22 13:45:27 compute-1 systemd[1]: Started Session c2 of User root.
Jan 22 13:45:27 compute-1 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 22 13:45:27 compute-1 ovn_controller[130144]: + [[ ! -n '' ]]
Jan 22 13:45:27 compute-1 ovn_controller[130144]: + . kolla_extend_start
Jan 22 13:45:27 compute-1 ovn_controller[130144]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 22 13:45:27 compute-1 ovn_controller[130144]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 22 13:45:27 compute-1 ovn_controller[130144]: + umask 0022
Jan 22 13:45:27 compute-1 ovn_controller[130144]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 22 13:45:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:27.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 22 13:45:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 22 13:45:27 compute-1 NetworkManager[48926]: <info>  [1769089527.4956] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 22 13:45:27 compute-1 NetworkManager[48926]: <info>  [1769089527.4971] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:45:27 compute-1 kernel: br-int: entered promiscuous mode
Jan 22 13:45:27 compute-1 NetworkManager[48926]: <warn>  [1769089527.4976] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 13:45:27 compute-1 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 13:45:27 compute-1 NetworkManager[48926]: <info>  [1769089527.4991] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 22 13:45:27 compute-1 NetworkManager[48926]: <info>  [1769089527.4998] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 22 13:45:27 compute-1 NetworkManager[48926]: <info>  [1769089527.5004] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 22 13:45:27 compute-1 systemd-udevd[130277]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 13:45:27 compute-1 ovn_controller[130144]: 2026-01-22T13:45:27Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 13:45:27 compute-1 NetworkManager[48926]: <info>  [1769089527.6449] manager: (ovn-d9fd1e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 22 13:45:27 compute-1 kernel: genev_sys_6081: entered promiscuous mode
Jan 22 13:45:27 compute-1 NetworkManager[48926]: <info>  [1769089527.6675] device (genev_sys_6081): carrier: link connected
Jan 22 13:45:27 compute-1 NetworkManager[48926]: <info>  [1769089527.6678] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 22 13:45:28 compute-1 NetworkManager[48926]: <info>  [1769089528.0352] manager: (ovn-c4fa18-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 22 13:45:28 compute-1 ceph-mon[81715]: pgmap v511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:28 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:28 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 514 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:28 compute-1 NetworkManager[48926]: <info>  [1769089528.5614] manager: (ovn-7335e4-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 22 13:45:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:29.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:29 compute-1 python3.9[130407]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 22 13:45:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:29.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:29 compute-1 ceph-mon[81715]: pgmap v512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:30 compute-1 sudo[130558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqmlnzlkiwuliaohojdwtulnecyyniwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089530.0557806-1833-76434921303399/AnsiballZ_stat.py'
Jan 22 13:45:30 compute-1 sudo[130558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:30 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:30 compute-1 python3.9[130560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:45:30 compute-1 sudo[130558]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:30 compute-1 sudo[130681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrslpsjyahtmuenpahoetcfgtynskvzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089530.0557806-1833-76434921303399/AnsiballZ_copy.py'
Jan 22 13:45:30 compute-1 sudo[130681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:31 compute-1 python3.9[130683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089530.0557806-1833-76434921303399/.source.yaml _original_basename=.3wxv79t1 follow=False checksum=46f66c8a157c96fcb7cc69848fe925e114c66b53 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:45:31 compute-1 sudo[130681]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:31.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:31.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:31 compute-1 ceph-mon[81715]: pgmap v513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 13:45:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2008 writes, 12K keys, 2008 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2008 writes, 2008 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2008 writes, 12K keys, 2008 commit groups, 1.0 writes per commit group, ingest: 23.79 MB, 0.04 MB/s
                                           Interval WAL: 2008 writes, 2008 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     54.2      0.27              0.04         5    0.054       0      0       0.0       0.0
                                             L6      1/0    7.93 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.3    163.0    135.8      0.25              0.09         4    0.062     18K   1808       0.0       0.0
                                            Sum      1/0    7.93 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3     78.0     93.2      0.52              0.12         9    0.058     18K   1808       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3     78.3     93.6      0.52              0.12         8    0.065     18K   1808       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    163.0    135.8      0.25              0.09         4    0.062     18K   1808       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     54.6      0.27              0.04         4    0.067       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.014, interval 0.014
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 1.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(62,1.13 MB,0.37106%) FilterBlock(9,59.98 KB,0.0192692%) IndexBlock(9,116.08 KB,0.0372887%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 13:45:31 compute-1 sudo[130833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjgtsjganqnhrvnebhqvjqdivuhcsdlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089531.6245873-1878-218982488126935/AnsiballZ_command.py'
Jan 22 13:45:31 compute-1 sudo[130833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:32 compute-1 python3.9[130835]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:45:32 compute-1 ovs-vsctl[130836]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 22 13:45:32 compute-1 sudo[130833]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:32 compute-1 sudo[130986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edqfrcnczwdcpitrjexndovtvztqrllr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089532.4619389-1902-140774979155253/AnsiballZ_command.py'
Jan 22 13:45:32 compute-1 sudo[130986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:32 compute-1 python3.9[130988]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:45:32 compute-1 ovs-vsctl[130990]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 22 13:45:32 compute-1 sudo[130986]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:33.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:33.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:33 compute-1 ceph-mon[81715]: pgmap v514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:33 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:34 compute-1 sudo[131141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itcljbgqrjtmflozjmjmpkjpcjnbnloo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089533.7624745-1944-258068283977636/AnsiballZ_command.py'
Jan 22 13:45:34 compute-1 sudo[131141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:34 compute-1 python3.9[131143]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:45:34 compute-1 ovs-vsctl[131144]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 22 13:45:34 compute-1 sudo[131141]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:34 compute-1 ceph-mon[81715]: pgmap v515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:34 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:34 compute-1 sshd-session[119269]: Connection closed by 192.168.122.30 port 54118
Jan 22 13:45:34 compute-1 sshd-session[119266]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:45:34 compute-1 systemd[1]: session-45.scope: Deactivated successfully.
Jan 22 13:45:34 compute-1 systemd-logind[787]: Session 45 logged out. Waiting for processes to exit.
Jan 22 13:45:34 compute-1 systemd[1]: session-45.scope: Consumed 58.908s CPU time.
Jan 22 13:45:34 compute-1 systemd-logind[787]: Removed session 45.
Jan 22 13:45:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:35.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:35.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:37.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:37 compute-1 ceph-mon[81715]: pgmap v516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:37.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:37 compute-1 systemd[1]: Stopping User Manager for UID 0...
Jan 22 13:45:37 compute-1 systemd[130185]: Activating special unit Exit the Session...
Jan 22 13:45:37 compute-1 systemd[130185]: Stopped target Main User Target.
Jan 22 13:45:37 compute-1 systemd[130185]: Stopped target Basic System.
Jan 22 13:45:37 compute-1 systemd[130185]: Stopped target Paths.
Jan 22 13:45:37 compute-1 systemd[130185]: Stopped target Sockets.
Jan 22 13:45:37 compute-1 systemd[130185]: Stopped target Timers.
Jan 22 13:45:37 compute-1 systemd[130185]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 22 13:45:37 compute-1 systemd[130185]: Closed D-Bus User Message Bus Socket.
Jan 22 13:45:37 compute-1 systemd[130185]: Stopped Create User's Volatile Files and Directories.
Jan 22 13:45:37 compute-1 systemd[130185]: Removed slice User Application Slice.
Jan 22 13:45:37 compute-1 systemd[130185]: Reached target Shutdown.
Jan 22 13:45:37 compute-1 systemd[130185]: Finished Exit the Session.
Jan 22 13:45:37 compute-1 systemd[130185]: Reached target Exit the Session.
Jan 22 13:45:37 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Jan 22 13:45:37 compute-1 systemd[1]: Stopped User Manager for UID 0.
Jan 22 13:45:37 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 22 13:45:37 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 22 13:45:37 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 22 13:45:37 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 22 13:45:37 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Jan 22 13:45:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:39.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:39 compute-1 ceph-mon[81715]: pgmap v517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:39 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:39 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:39.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:40 compute-1 sshd-session[131173]: Accepted publickey for zuul from 192.168.122.30 port 57542 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:45:40 compute-1 systemd-logind[787]: New session 47 of user zuul.
Jan 22 13:45:40 compute-1 systemd[1]: Started Session 47 of User zuul.
Jan 22 13:45:40 compute-1 sshd-session[131173]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:45:40 compute-1 sudo[131229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:40 compute-1 sudo[131229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:40 compute-1 sudo[131229]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:40 compute-1 sudo[131254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:45:40 compute-1 sudo[131254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:40 compute-1 sudo[131254]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:40 compute-1 sudo[131299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:40 compute-1 sudo[131299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:40 compute-1 sudo[131299]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:40 compute-1 sudo[131336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:45:40 compute-1 sudo[131336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:41.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:41 compute-1 sudo[131336]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:41.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:41 compute-1 python3.9[131439]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:45:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:42 compute-1 sudo[131615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqvvmkchsntidcskwgfczjbwxuhwxnkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089542.2039678-64-206408606412895/AnsiballZ_file.py'
Jan 22 13:45:42 compute-1 sudo[131615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:42 compute-1 ceph-mon[81715]: pgmap v518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 13:45:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 13:45:42 compute-1 ceph-mon[81715]: pgmap v519: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:45:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 22 13:45:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:45:42 compute-1 python3.9[131617]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:42 compute-1 sudo[131615]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:43.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:43 compute-1 sudo[131767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfhrrxdteohjazoqtihsjjxljmqtoajm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089543.0644634-64-248233235345615/AnsiballZ_file.py'
Jan 22 13:45:43 compute-1 sudo[131767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:43.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:43 compute-1 python3.9[131769]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:43 compute-1 sudo[131767]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:44 compute-1 sudo[131919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiwomyhyixjuchxujukqaiyumkiiuamm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089543.7635431-64-70034090522011/AnsiballZ_file.py'
Jan 22 13:45:44 compute-1 sudo[131919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:44 compute-1 python3.9[131921]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:44 compute-1 sudo[131919]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:44 compute-1 sudo[132071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emkidslgcezoliciqdoufiqrahbmfwcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089544.5692232-64-226870129248673/AnsiballZ_file.py'
Jan 22 13:45:44 compute-1 sudo[132071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:45 compute-1 python3.9[132073]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:45 compute-1 sudo[132071]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:45.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:45.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:45 compute-1 sudo[132223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grrpskfmzgorfrjjpmzfbwtnrxfzjdhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089545.2687006-64-210805263945436/AnsiballZ_file.py'
Jan 22 13:45:45 compute-1 sudo[132223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:45 compute-1 python3.9[132225]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:45 compute-1 sudo[132223]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:46 compute-1 ceph-mon[81715]: pgmap v520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:46 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:46 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:45:46 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:45:46 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:45:46 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:45:46 compute-1 python3.9[132376]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:45:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:47.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:47 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:45:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:45:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:45:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:45:47 compute-1 ceph-mon[81715]: pgmap v521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:47 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:47.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:47 compute-1 sudo[132526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddryilisyyhdzhpgnyoarpkopxusnlan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089547.1424212-196-125873802453314/AnsiballZ_seboolean.py'
Jan 22 13:45:47 compute-1 sudo[132526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:47 compute-1 python3.9[132528]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 22 13:45:48 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:48 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:48 compute-1 sudo[132526]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:49.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:49 compute-1 ceph-mon[81715]: pgmap v522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:49 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:49 compute-1 python3.9[132678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:45:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:49.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:50 compute-1 python3.9[132799]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089548.7459276-220-169547906417519/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:50 compute-1 python3.9[132949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:45:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:51.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:51.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:51 compute-1 python3.9[133070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089550.4398472-265-262474158336582/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:51 compute-1 ceph-mon[81715]: pgmap v523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:51 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:52 compute-1 sudo[133220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qulyesiodqviijjwmzkonmvptvvggwec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089552.0388424-316-48244486773450/AnsiballZ_setup.py'
Jan 22 13:45:52 compute-1 sudo[133220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:52 compute-1 sudo[133223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:52 compute-1 sudo[133223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:52 compute-1 sudo[133223]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:52 compute-1 sudo[133248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:45:52 compute-1 sudo[133248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:52 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:45:52 compute-1 ceph-mon[81715]: pgmap v524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:52 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:45:52 compute-1 sudo[133248]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:52 compute-1 python3.9[133222]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:45:52 compute-1 sudo[133220]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:53.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:53 compute-1 sudo[133354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxwbqnudtqwgyhwqffaekkthurpbkbme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089552.0388424-316-48244486773450/AnsiballZ_dnf.py'
Jan 22 13:45:53 compute-1 sudo[133354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:53.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:53 compute-1 python3.9[133356]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:45:53 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 544 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:54 compute-1 ceph-mon[81715]: pgmap v525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:54 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:55 compute-1 sudo[133354]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:55.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:55.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:55 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:55 compute-1 sudo[133507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skqzckbxjlljvmiwxonftowzphmvkcgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089555.384147-352-197646649305903/AnsiballZ_systemd.py'
Jan 22 13:45:55 compute-1 sudo[133507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:56 compute-1 python3.9[133509]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:45:56 compute-1 sudo[133507]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:56 compute-1 ceph-mon[81715]: pgmap v526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:56 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:57 compute-1 ovn_controller[130144]: 2026-01-22T13:45:57Z|00025|memory|INFO|16512 kB peak resident set size after 29.7 seconds
Jan 22 13:45:57 compute-1 ovn_controller[130144]: 2026-01-22T13:45:57Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 22 13:45:57 compute-1 podman[133663]: 2026-01-22 13:45:57.1260599 +0000 UTC m=+0.106610106 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 13:45:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:57.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:57 compute-1 python3.9[133662]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:45:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:57.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:57 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:57 compute-1 python3.9[133807]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089556.6324115-376-121923078544855/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:58 compute-1 ceph-mon[81715]: pgmap v527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:58 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:59 compute-1 python3.9[133957]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:45:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:59.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:45:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:59.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:59 compute-1 python3.9[134078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089558.6265862-376-58653762888516/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:00 compute-1 ceph-mon[81715]: pgmap v528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:00 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:46:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:01.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:46:01 compute-1 python3.9[134228]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:01.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:01 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:01 compute-1 python3.9[134349]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089560.8851347-508-265462852637144/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:02 compute-1 python3.9[134499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:02 compute-1 ceph-mon[81715]: pgmap v529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:02 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:03.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:03 compute-1 python3.9[134620]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089562.2240312-508-245587857674355/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:03.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:04 compute-1 python3.9[134770]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:46:04 compute-1 ceph-mon[81715]: pgmap v530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:04 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:04 compute-1 sudo[134922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsydttmqgyfgldptzuilbatqozvksrij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089564.62964-622-87864521218852/AnsiballZ_file.py'
Jan 22 13:46:04 compute-1 sudo[134922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:05 compute-1 python3.9[134924]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:05 compute-1 sudo[134922]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:05.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:05.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:06 compute-1 sudo[135074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqnhunfzkjkgsqqawrntjgkjxiohrvvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089565.7206736-647-98137141009423/AnsiballZ_stat.py'
Jan 22 13:46:06 compute-1 sudo[135074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:06 compute-1 python3.9[135076]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:06 compute-1 sudo[135074]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:06 compute-1 sudo[135152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqvyupyzyimqnuazumuyfpybapjyqlke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089565.7206736-647-98137141009423/AnsiballZ_file.py'
Jan 22 13:46:06 compute-1 sudo[135152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:06 compute-1 python3.9[135154]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:06 compute-1 sudo[135152]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:06 compute-1 ceph-mon[81715]: pgmap v531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:06 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:07.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:07 compute-1 sudo[135304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krycryagsfbzuuytmdlufemljbzhyntm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089566.9020164-647-233559636384409/AnsiballZ_stat.py'
Jan 22 13:46:07 compute-1 sudo[135304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:07 compute-1 python3.9[135306]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:07 compute-1 sudo[135304]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:07.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:07 compute-1 sudo[135382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siqhhvtamfllxevmikvvxpihyhuodafo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089566.9020164-647-233559636384409/AnsiballZ_file.py'
Jan 22 13:46:07 compute-1 sudo[135382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:07 compute-1 python3.9[135384]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:07 compute-1 sudo[135382]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:07 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:07 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:08 compute-1 sudo[135534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxheglwtmdnysmnijxkmeerdmhhviddq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089568.430305-715-47115894033135/AnsiballZ_file.py'
Jan 22 13:46:08 compute-1 sudo[135534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:08 compute-1 python3.9[135536]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:08 compute-1 sudo[135534]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:08 compute-1 ceph-mon[81715]: pgmap v532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:09.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:09.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:09 compute-1 sudo[135686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcrusjsmrvcfbafxdehcreumobacopqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089569.329295-739-146586578854587/AnsiballZ_stat.py'
Jan 22 13:46:09 compute-1 sudo[135686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:09 compute-1 python3.9[135688]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:09 compute-1 sudo[135686]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:10 compute-1 sudo[135764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlndievnlebylgppsqncraiyyaxzjywh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089569.329295-739-146586578854587/AnsiballZ_file.py'
Jan 22 13:46:10 compute-1 sudo[135764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:10 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:10 compute-1 python3.9[135766]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:10 compute-1 sudo[135764]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:11 compute-1 ceph-mon[81715]: pgmap v533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:11 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:11 compute-1 sudo[135916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynmsbanahvhfoggjrlmkqbhppusjgdwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089570.8184419-775-171780467834005/AnsiballZ_stat.py'
Jan 22 13:46:11 compute-1 sudo[135916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:11.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:11 compute-1 python3.9[135918]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:11 compute-1 sudo[135916]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:11.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:11 compute-1 sudo[135994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwjfycpjywuupyhjwthkvvocefgetmyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089570.8184419-775-171780467834005/AnsiballZ_file.py'
Jan 22 13:46:11 compute-1 sudo[135994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:11 compute-1 python3.9[135996]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:11 compute-1 sudo[135994]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:12 compute-1 sudo[136146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsivdjaedvpeearswlbtcmaujzzatlxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089572.245773-811-47942407429988/AnsiballZ_systemd.py'
Jan 22 13:46:12 compute-1 sudo[136146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:12 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:13 compute-1 python3.9[136148]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:46:13 compute-1 systemd[1]: Reloading.
Jan 22 13:46:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:13.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:13 compute-1 systemd-rc-local-generator[136176]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:46:13 compute-1 systemd-sysv-generator[136179]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:46:13 compute-1 sudo[136146]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:13.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:13 compute-1 ceph-mon[81715]: pgmap v534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:13 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 564 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:14 compute-1 sudo[136335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppqwawczmmatjkhacuvfblbdtehsksbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089573.7401862-835-201865286252885/AnsiballZ_stat.py'
Jan 22 13:46:14 compute-1 sudo[136335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:14 compute-1 python3.9[136337]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:14 compute-1 sudo[136335]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:14 compute-1 sudo[136413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atmqgwoonhtghivodrqfhiutspjkfmpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089573.7401862-835-201865286252885/AnsiballZ_file.py'
Jan 22 13:46:14 compute-1 sudo[136413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:14 compute-1 python3.9[136415]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:14 compute-1 sudo[136413]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:15.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:15 compute-1 ceph-mon[81715]: pgmap v535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:15 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:15 compute-1 sudo[136565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rizhvgqqycdkemkonuotxtumteyiwqra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089575.168778-871-96850906261651/AnsiballZ_stat.py'
Jan 22 13:46:15 compute-1 sudo[136565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:15.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:15 compute-1 python3.9[136567]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:15 compute-1 sudo[136565]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:15 compute-1 sudo[136643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltylmaledfjzbphladubxyquoimxnpmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089575.168778-871-96850906261651/AnsiballZ_file.py'
Jan 22 13:46:15 compute-1 sudo[136643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:16 compute-1 python3.9[136645]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:16 compute-1 sudo[136643]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:16 compute-1 sudo[136795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhrfzaxksghroordarjrtlcqyiykufsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089576.5023005-907-171533566516531/AnsiballZ_systemd.py'
Jan 22 13:46:16 compute-1 sudo[136795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:17 compute-1 python3.9[136797]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:46:17 compute-1 systemd[1]: Reloading.
Jan 22 13:46:17 compute-1 systemd-rc-local-generator[136823]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:46:17 compute-1 systemd-sysv-generator[136829]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:46:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:17.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:17 compute-1 systemd[1]: Starting Create netns directory...
Jan 22 13:46:17 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 13:46:17 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 13:46:17 compute-1 systemd[1]: Finished Create netns directory.
Jan 22 13:46:17 compute-1 sudo[136795]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:17.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:18 compute-1 ceph-mon[81715]: pgmap v536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:18 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:18 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:18 compute-1 sudo[136990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgarlhmdardqrxkhddcicbeqvpklkide ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089578.065684-937-97629834304834/AnsiballZ_file.py'
Jan 22 13:46:18 compute-1 sudo[136990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:18 compute-1 python3.9[136992]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:18 compute-1 sudo[136990]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:19 compute-1 ceph-mon[81715]: pgmap v537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:19 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 569 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:19 compute-1 sudo[137142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufzxqzqdxnvkfhbssdukaxnsmhdykoyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089578.8318424-961-72795933302664/AnsiballZ_stat.py'
Jan 22 13:46:19 compute-1 sudo[137142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:46:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:19.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:46:19 compute-1 python3.9[137144]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:19 compute-1 sudo[137142]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:19.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:19 compute-1 sudo[137266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcsvahhwmopybuzqsigueoblwfogozeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089578.8318424-961-72795933302664/AnsiballZ_copy.py'
Jan 22 13:46:19 compute-1 sudo[137266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:19 compute-1 python3.9[137268]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089578.8318424-961-72795933302664/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:19 compute-1 sudo[137266]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:20 compute-1 sudo[137418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezvcdzmdbsyrufpdkbqlmhuoicdvtdxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089580.5931044-1012-68194762165702/AnsiballZ_file.py'
Jan 22 13:46:20 compute-1 sudo[137418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:21 compute-1 python3.9[137420]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:21 compute-1 sudo[137418]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:21 compute-1 ceph-mon[81715]: pgmap v538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:21 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:21.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:21.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:21 compute-1 sudo[137570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofgcyrmverodaifnjuaxnmiicyesiqdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089581.4190307-1036-37808997865568/AnsiballZ_file.py'
Jan 22 13:46:21 compute-1 sudo[137570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:21 compute-1 python3.9[137572]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:21 compute-1 sudo[137570]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:22 compute-1 sudo[137722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzuioxgdisghetgwxnlsvqhbvlzsxyaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089582.1922517-1060-166932575519149/AnsiballZ_stat.py'
Jan 22 13:46:22 compute-1 sudo[137722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:22 compute-1 python3.9[137724]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:22 compute-1 sudo[137722]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:23 compute-1 sudo[137845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfcxlhkyfkilarwutnxgbgoqyxyxcbda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089582.1922517-1060-166932575519149/AnsiballZ_copy.py'
Jan 22 13:46:23 compute-1 sudo[137845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:23 compute-1 ceph-mon[81715]: pgmap v539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:23.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:23 compute-1 python3.9[137847]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089582.1922517-1060-166932575519149/.source.json _original_basename=.mc849uot follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:23 compute-1 sudo[137845]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:23.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:24 compute-1 python3.9[137997]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:25.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:25 compute-1 ceph-mon[81715]: pgmap v540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:25.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:26 compute-1 sudo[138418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdqmmmaxwnddpefnmpvjffumurpjhoeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089585.8374784-1180-114878769709434/AnsiballZ_container_config_data.py'
Jan 22 13:46:26 compute-1 sudo[138418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:26 compute-1 python3.9[138420]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 22 13:46:26 compute-1 sudo[138418]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:27.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:27 compute-1 ceph-mon[81715]: pgmap v541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:27 compute-1 sudo[138583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfifuejtrlpxqgmfpghwpjkcruacfoms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089587.046254-1213-175729437321888/AnsiballZ_container_config_hash.py'
Jan 22 13:46:27 compute-1 sudo[138583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:46:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:27.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:46:27 compute-1 podman[138544]: 2026-01-22 13:46:27.559311579 +0000 UTC m=+0.085581345 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 13:46:27 compute-1 python3.9[138591]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 13:46:27 compute-1 sudo[138583]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:28 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:28 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 574 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:28 compute-1 sudo[138748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lggqvzpsnggquwuyiwmredvsltwgrkxn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769089588.193221-1243-221317430778410/AnsiballZ_edpm_container_manage.py'
Jan 22 13:46:28 compute-1 sudo[138748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:29 compute-1 python3[138750]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 13:46:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:29.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:29 compute-1 ceph-mon[81715]: pgmap v542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:29.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:30 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:31.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:31 compute-1 ceph-mon[81715]: pgmap v543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:31.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:33.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:33.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:33 compute-1 ceph-mon[81715]: pgmap v544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:33 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 584 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:35 compute-1 ceph-mon[81715]: pgmap v545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:35.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:35.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:37.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:37.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:38 compute-1 ceph-mon[81715]: pgmap v546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:38 compute-1 podman[138762]: 2026-01-22 13:46:38.966347258 +0000 UTC m=+9.785166721 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 13:46:39 compute-1 podman[138895]: 2026-01-22 13:46:39.144749798 +0000 UTC m=+0.058500888 container create 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 13:46:39 compute-1 podman[138895]: 2026-01-22 13:46:39.115031666 +0000 UTC m=+0.028782766 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 13:46:39 compute-1 python3[138750]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 13:46:39 compute-1 sudo[138748]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:39.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:39.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:39 compute-1 ceph-mon[81715]: pgmap v547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:39 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:39 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 589 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:39 compute-1 sudo[139083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okhhvertcgojcinwxwogwuieuyjdgira ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089599.491267-1267-265708384528532/AnsiballZ_stat.py'
Jan 22 13:46:39 compute-1 sudo[139083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:39 compute-1 python3.9[139085]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:46:39 compute-1 sudo[139083]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:40 compute-1 sudo[139237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pneuloqqfxwwlmmpvkuxavhlomnjtqnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089600.4580622-1294-257570167559845/AnsiballZ_file.py'
Jan 22 13:46:40 compute-1 sudo[139237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:41 compute-1 python3.9[139239]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:41 compute-1 sudo[139237]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:41.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:41 compute-1 sudo[139313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trfwiyeqlctvijovxdgvzmzslazvakhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089600.4580622-1294-257570167559845/AnsiballZ_stat.py'
Jan 22 13:46:41 compute-1 sudo[139313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:41.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:41 compute-1 python3.9[139315]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:46:41 compute-1 ceph-mon[81715]: pgmap v548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:41 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:41 compute-1 sudo[139313]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:42 compute-1 sudo[139464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqowryagluohsgrkbhuqdkrtpukrxtdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089601.7177265-1294-96858669511986/AnsiballZ_copy.py'
Jan 22 13:46:42 compute-1 sudo[139464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:42 compute-1 python3.9[139466]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769089601.7177265-1294-96858669511986/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:42 compute-1 sudo[139464]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:42 compute-1 sudo[139540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqlwwynalolqkexztdyfdjqzdstcojrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089601.7177265-1294-96858669511986/AnsiballZ_systemd.py'
Jan 22 13:46:42 compute-1 sudo[139540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:42 compute-1 ceph-mon[81715]: pgmap v549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:42 compute-1 python3.9[139542]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:46:42 compute-1 systemd[1]: Reloading.
Jan 22 13:46:43 compute-1 systemd-sysv-generator[139574]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:46:43 compute-1 systemd-rc-local-generator[139569]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:46:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:43.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:43 compute-1 sudo[139540]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:46:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:43.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:46:43 compute-1 sudo[139651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnxogedlplkvhvxmzmtdzlcrmzvyjfta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089601.7177265-1294-96858669511986/AnsiballZ_systemd.py'
Jan 22 13:46:43 compute-1 sudo[139651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:44 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:44 compute-1 python3.9[139653]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:46:44 compute-1 systemd[1]: Reloading.
Jan 22 13:46:44 compute-1 systemd-rc-local-generator[139678]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:46:44 compute-1 systemd-sysv-generator[139681]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:46:44 compute-1 systemd[1]: Starting ovn_metadata_agent container...
Jan 22 13:46:45 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:46:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1854a7059a530ec13fd336313dc43f22959daca98bb830b9b905c42edd9e391b/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 22 13:46:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1854a7059a530ec13fd336313dc43f22959daca98bb830b9b905c42edd9e391b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 13:46:45 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69.
Jan 22 13:46:45 compute-1 podman[139694]: 2026-01-22 13:46:45.226412276 +0000 UTC m=+0.199053991 container init 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: + sudo -E kolla_set_configs
Jan 22 13:46:45 compute-1 podman[139694]: 2026-01-22 13:46:45.265074794 +0000 UTC m=+0.237716509 container start 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 13:46:45 compute-1 edpm-start-podman-container[139694]: ovn_metadata_agent
Jan 22 13:46:45 compute-1 edpm-start-podman-container[139693]: Creating additional drop-in dependency for "ovn_metadata_agent" (49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69)
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: INFO:__main__:Validating config file
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: INFO:__main__:Copying service configuration files
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: INFO:__main__:Writing out command to execute
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 22 13:46:45 compute-1 podman[139717]: 2026-01-22 13:46:45.33587984 +0000 UTC m=+0.058296862 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: ++ cat /run_command
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: + CMD=neutron-ovn-metadata-agent
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: + ARGS=
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: + sudo kolla_copy_cacerts
Jan 22 13:46:45 compute-1 systemd[1]: Reloading.
Jan 22 13:46:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:45.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: Running command: 'neutron-ovn-metadata-agent'
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: + [[ ! -n '' ]]
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: + . kolla_extend_start
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: + umask 0022
Jan 22 13:46:45 compute-1 ovn_metadata_agent[139710]: + exec neutron-ovn-metadata-agent
Jan 22 13:46:45 compute-1 systemd-rc-local-generator[139783]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:46:45 compute-1 systemd-sysv-generator[139789]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:46:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:45 compute-1 ceph-mon[81715]: pgmap v550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:45.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:45 compute-1 systemd[1]: Started ovn_metadata_agent container.
Jan 22 13:46:45 compute-1 sudo[139651]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:46 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:46 compute-1 python3.9[139945]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 22 13:46:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:47.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.382 139715 INFO neutron.common.config [-] Logging enabled!
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.382 139715 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.382 139715 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.383 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.383 139715 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.383 139715 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.383 139715 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.383 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.386 139715 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.386 139715 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.386 139715 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.386 139715 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.386 139715 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.386 139715 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.386 139715 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.390 139715 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.390 139715 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.390 139715 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.390 139715 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.390 139715 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.390 139715 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.390 139715 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.391 139715 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.391 139715 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.391 139715 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.391 139715 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.391 139715 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.391 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.391 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.393 139715 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.393 139715 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.393 139715 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.393 139715 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.393 139715 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.393 139715 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.393 139715 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.418 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.418 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.418 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.418 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.428 139715 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.428 139715 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.429 139715 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.429 139715 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.429 139715 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.444 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c803af81-5cf0-46ac-8f46-401e876a838c (UUID: c803af81-5cf0-46ac-8f46-401e876a838c) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.468 139715 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.468 139715 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.469 139715 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.469 139715 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.474 139715 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.480 139715 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.486 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c803af81-5cf0-46ac-8f46-401e876a838c'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fd8a7b85640>], external_ids={}, name=c803af81-5cf0-46ac-8f46-401e876a838c, nb_cfg_timestamp=1769089535619, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.487 139715 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fd8a7b74f40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.488 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.488 139715 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.489 139715 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.489 139715 INFO oslo_service.service [-] Starting 1 workers
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.493 139715 DEBUG oslo_service.service [-] Started child 139970 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.497 139715 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpcaxevftl/privsep.sock']
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.500 139970 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-427228'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.531 139970 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.532 139970 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.532 139970 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.536 139970 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.542 139970 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 22 13:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.550 139970 INFO eventlet.wsgi.server [-] (139970) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 22 13:46:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:46:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:47.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:46:47 compute-1 ceph-mon[81715]: pgmap v551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:47 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:47 compute-1 sudo[140100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbhsmskwvasxcbexjejadnitfyaoxyhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089607.5571723-1429-237980037018307/AnsiballZ_stat.py'
Jan 22 13:46:47 compute-1 sudo[140100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:48 compute-1 python3.9[140102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:48 compute-1 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 22 13:46:48 compute-1 sudo[140100]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:48 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.360 139715 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 22 13:46:48 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.361 139715 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpcaxevftl/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 22 13:46:48 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.137 140104 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 22 13:46:48 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.144 140104 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 22 13:46:48 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.147 140104 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 22 13:46:48 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.147 140104 INFO oslo.privsep.daemon [-] privsep daemon running as pid 140104
Jan 22 13:46:48 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.364 140104 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3e6009-09c6-446f-a39a-d2d40e66cdc2]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 13:46:48 compute-1 sudo[140230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyoruuanxkngumcicvovbuudydicmjus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089607.5571723-1429-237980037018307/AnsiballZ_copy.py'
Jan 22 13:46:48 compute-1 sudo[140230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:48 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:48 compute-1 ceph-mon[81715]: pgmap v552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:48 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 594 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:48 compute-1 python3.9[140232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089607.5571723-1429-237980037018307/.source.yaml _original_basename=.jeqc3w_n follow=False checksum=a7c93daf1344287e5303b3d1648c714a9349cb4e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:48 compute-1 sudo[140230]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:48 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.929 140104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:46:48 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.929 140104 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:46:48 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.929 140104 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:46:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:49 compute-1 sshd-session[131176]: Connection closed by 192.168.122.30 port 57542
Jan 22 13:46:49 compute-1 sshd-session[131173]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:46:49 compute-1 systemd[1]: session-47.scope: Deactivated successfully.
Jan 22 13:46:49 compute-1 systemd[1]: session-47.scope: Consumed 57.881s CPU time.
Jan 22 13:46:49 compute-1 systemd-logind[787]: Session 47 logged out. Waiting for processes to exit.
Jan 22 13:46:49 compute-1 systemd-logind[787]: Removed session 47.
Jan 22 13:46:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:49.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:49.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.624 140104 DEBUG oslo.privsep.daemon [-] privsep: reply[6c89a9c8-e3c6-4f91-a9ea-9e42da5ef136]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.626 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, column=external_ids, values=({'neutron:ovn-metadata-id': '99503455-a922-596d-bbdf-dff82d80b62f'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.636 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.642 139715 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.643 139715 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.643 139715 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.643 139715 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.643 139715 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.643 139715 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.643 139715 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.644 139715 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.644 139715 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.644 139715 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.644 139715 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.645 139715 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.645 139715 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.645 139715 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.645 139715 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.645 139715 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.645 139715 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.645 139715 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.658 139715 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.658 139715 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.658 139715 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.658 139715 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.658 139715 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.658 139715 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.658 139715 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.659 139715 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.659 139715 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.659 139715 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.659 139715 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.659 139715 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.659 139715 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.659 139715 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.676 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.676 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.676 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.676 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.676 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.676 139715 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.676 139715 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.677 139715 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.677 139715 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.677 139715 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 22 13:46:49 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:50 compute-1 ceph-mon[81715]: pgmap v553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:51.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:51.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:51 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:52 compute-1 sudo[140257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:46:52 compute-1 sudo[140257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:46:52 compute-1 sudo[140257]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:52 compute-1 sudo[140282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:46:52 compute-1 sudo[140282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:46:52 compute-1 sudo[140282]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:52 compute-1 ceph-mon[81715]: pgmap v554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:52 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 604 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:53 compute-1 sudo[140307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:46:53 compute-1 sudo[140307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:46:53 compute-1 sudo[140307]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:53 compute-1 sudo[140332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:46:53 compute-1 sudo[140332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:46:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:53.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:53 compute-1 sudo[140332]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:53.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:54 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:46:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:46:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:46:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:46:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:46:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:46:54 compute-1 sshd-session[140389]: Accepted publickey for zuul from 192.168.122.30 port 38382 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:46:54 compute-1 systemd-logind[787]: New session 48 of user zuul.
Jan 22 13:46:54 compute-1 systemd[1]: Started Session 48 of User zuul.
Jan 22 13:46:54 compute-1 sshd-session[140389]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:46:55 compute-1 ceph-mon[81715]: pgmap v555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:55 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:55.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:55.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:55 compute-1 python3.9[140542]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:46:56 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:56 compute-1 sudo[140696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwosjvtbiarpytkceodcjmqnworvpqow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089616.218291-63-193121085901752/AnsiballZ_command.py'
Jan 22 13:46:56 compute-1 sudo[140696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:56 compute-1 python3.9[140698]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:46:56 compute-1 sudo[140696]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:57 compute-1 ceph-mon[81715]: pgmap v556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:57 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:57.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:57.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:58 compute-1 sudo[140875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uptjottihifirqofarwluewdcltzglkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089617.4677467-96-86626493105994/AnsiballZ_systemd_service.py'
Jan 22 13:46:58 compute-1 sudo[140875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:58 compute-1 podman[140830]: 2026-01-22 13:46:58.135585885 +0000 UTC m=+0.112955031 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 13:46:58 compute-1 python3.9[140881]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:46:58 compute-1 systemd[1]: Reloading.
Jan 22 13:46:58 compute-1 systemd-rc-local-generator[140920]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:46:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:58 compute-1 systemd-sysv-generator[140924]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:46:58 compute-1 sudo[140875]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:46:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:59.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:46:59 compute-1 ceph-mon[81715]: pgmap v557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:59 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 609 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:46:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:59.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:59 compute-1 python3.9[141075]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:46:59 compute-1 network[141092]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:46:59 compute-1 network[141093]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:46:59 compute-1 network[141094]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:47:00 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:01 compute-1 sudo[141140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:47:01 compute-1 sudo[141140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:47:01 compute-1 sudo[141140]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:01 compute-1 sudo[141169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:47:01 compute-1 sudo[141169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:47:01 compute-1 sudo[141169]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:01.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:01.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:01 compute-1 ceph-mon[81715]: pgmap v558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 22 13:47:01 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:47:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:47:03 compute-1 ceph-mon[81715]: pgmap v559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 0 B/s wr, 79 op/s
Jan 22 13:47:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:47:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:03.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:47:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:03.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:04 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:04 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:05 compute-1 ceph-mon[81715]: pgmap v560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 122 op/s
Jan 22 13:47:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:05.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:05.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:06 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:06 compute-1 sudo[141404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmpjsmwequplbtwluougoahkdxsolvja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089626.297007-153-70817876503947/AnsiballZ_systemd_service.py'
Jan 22 13:47:06 compute-1 sudo[141404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:06 compute-1 python3.9[141406]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:47:06 compute-1 sudo[141404]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:07 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:07 compute-1 ceph-mon[81715]: pgmap v561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 122 op/s
Jan 22 13:47:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:07.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:47:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:07.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:47:07 compute-1 sudo[141557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnwigmfwhnittinwghxbxgragfmpmxtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089627.140562-153-44790108044736/AnsiballZ_systemd_service.py'
Jan 22 13:47:07 compute-1 sudo[141557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:07 compute-1 python3.9[141559]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:47:07 compute-1 sudo[141557]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:08 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 614 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:08 compute-1 sudo[141710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcnfveahhmgjmdrtyfcoilubfqpiyfiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089628.1208224-153-99463838058650/AnsiballZ_systemd_service.py'
Jan 22 13:47:08 compute-1 sudo[141710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:08 compute-1 python3.9[141712]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:47:08 compute-1 sudo[141710]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:09 compute-1 sudo[141863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xncpuvtajybtazfmjebfkocnwqtomwyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089629.0006754-153-228597829841823/AnsiballZ_systemd_service.py'
Jan 22 13:47:09 compute-1 sudo[141863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:47:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:09.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:47:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:09.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:10 compute-1 python3.9[141865]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:47:10 compute-1 sudo[141863]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:11 compute-1 sudo[142016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blyxjvchvvbkeovvqvndqevxfdmyekxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089630.6802857-153-154252622870889/AnsiballZ_systemd_service.py'
Jan 22 13:47:11 compute-1 sudo[142016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:11.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:11 compute-1 python3.9[142018]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:47:11 compute-1 sudo[142016]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:11.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:11 compute-1 sudo[142169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgblqpkuwflkivtqwsvzahkcctfcjfwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089631.6111646-153-207770540439279/AnsiballZ_systemd_service.py'
Jan 22 13:47:11 compute-1 sudo[142169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:12 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:12 compute-1 ceph-mon[81715]: pgmap v562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 0 B/s wr, 149 op/s
Jan 22 13:47:12 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:12 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:12 compute-1 ceph-mon[81715]: pgmap v563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 0 B/s wr, 149 op/s
Jan 22 13:47:12 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:12 compute-1 python3.9[142171]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:47:12 compute-1 sudo[142169]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:13 compute-1 sudo[142322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyrrynelixswtagcfpwkwxdpxukhotvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089632.98887-153-203732866912148/AnsiballZ_systemd_service.py'
Jan 22 13:47:13 compute-1 sudo[142322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:13.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:13.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:13 compute-1 python3.9[142324]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:47:13 compute-1 sudo[142322]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:13 compute-1 ceph-mon[81715]: pgmap v564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Jan 22 13:47:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:13 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 619 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:15.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:15.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:15 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:15 compute-1 ceph-mon[81715]: pgmap v565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 0 B/s wr, 69 op/s
Jan 22 13:47:15 compute-1 sudo[142487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iapsyyjptydtkqrhoezgqjyvknysjbpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089634.3122833-309-188726678534401/AnsiballZ_file.py'
Jan 22 13:47:15 compute-1 sudo[142487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:15 compute-1 podman[142449]: 2026-01-22 13:47:15.76439449 +0000 UTC m=+0.068360917 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 13:47:15 compute-1 python3.9[142493]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:15 compute-1 sudo[142487]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:16 compute-1 sudo[142647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmsqtwkxbsprenfmtlockeehgnyzturd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089636.0888004-309-143893154538397/AnsiballZ_file.py'
Jan 22 13:47:16 compute-1 sudo[142647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:16 compute-1 python3.9[142649]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:16 compute-1 sudo[142647]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:16 compute-1 ceph-mon[81715]: pgmap v566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 22 13:47:17 compute-1 sudo[142799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfwziiynsvyueqeadbmmsiqeqsxfjpsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089636.7051349-309-56009225712947/AnsiballZ_file.py'
Jan 22 13:47:17 compute-1 sudo[142799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:17 compute-1 python3.9[142801]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:17 compute-1 sudo[142799]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:17.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:17.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:17 compute-1 sudo[142951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcxflmnpxipyzznalnygupsrjdzdxqdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089637.3435829-309-135123773897770/AnsiballZ_file.py'
Jan 22 13:47:17 compute-1 sudo[142951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:17 compute-1 python3.9[142953]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:17 compute-1 sudo[142951]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:18 compute-1 sudo[143103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phknvaqvwfuwfixlefdrrgindiqpyznc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089637.962795-309-266623709009859/AnsiballZ_file.py'
Jan 22 13:47:18 compute-1 sudo[143103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:18 compute-1 python3.9[143105]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:18 compute-1 sudo[143103]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:19 compute-1 sudo[143255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbqzohnsimwudmgjzprwsgebwedqufvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089638.8394265-309-52724867899003/AnsiballZ_file.py'
Jan 22 13:47:19 compute-1 sudo[143255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:19 compute-1 python3.9[143257]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:19 compute-1 sudo[143255]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:19.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:19.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:19 compute-1 sudo[143407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmdfbmimmvrgtgozbosxbtptuiprkxro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089639.4819057-309-40603957336659/AnsiballZ_file.py'
Jan 22 13:47:19 compute-1 sudo[143407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:20 compute-1 ceph-mon[81715]: pgmap v567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 22 13:47:20 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 624 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:20 compute-1 python3.9[143409]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:20 compute-1 sudo[143407]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:20 compute-1 sudo[143559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjzscjgbxhjcsqmczdjwsozosppjilrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089640.5810473-459-247788027887726/AnsiballZ_file.py'
Jan 22 13:47:20 compute-1 sudo[143559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:21 compute-1 python3.9[143561]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:21 compute-1 sudo[143559]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:21.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:21.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:21 compute-1 sudo[143711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usvieyipvlcqfspydadbvetlbpnfvwaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089641.4294808-459-71631450955257/AnsiballZ_file.py'
Jan 22 13:47:21 compute-1 sudo[143711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:21 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:21 compute-1 ceph-mon[81715]: pgmap v568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:21 compute-1 python3.9[143713]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:21 compute-1 sudo[143711]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:22 compute-1 sudo[143863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwqegguijbrrishonfkocistixdbahmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089642.0305152-459-172778339950228/AnsiballZ_file.py'
Jan 22 13:47:22 compute-1 sudo[143863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:22 compute-1 python3.9[143865]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:22 compute-1 sudo[143863]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:22 compute-1 sudo[144015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoxngvxjiljizubzupvebglyzdqzwbvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089642.7040055-459-229485575239270/AnsiballZ_file.py'
Jan 22 13:47:22 compute-1 sudo[144015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:23 compute-1 python3.9[144017]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:23 compute-1 sudo[144015]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:23 compute-1 ceph-mon[81715]: pgmap v569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:23.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:23 compute-1 sudo[144167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avfwypoloxtphkiizsmuylsbxrzffmll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089643.2962954-459-12297723960139/AnsiballZ_file.py'
Jan 22 13:47:23 compute-1 sudo[144167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:23.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:23 compute-1 python3.9[144169]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:23 compute-1 sudo[144167]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:24 compute-1 sudo[144319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcbnjdltsndfsvuocmzyqruvhjkgrzwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089643.9391747-459-174568293989322/AnsiballZ_file.py'
Jan 22 13:47:24 compute-1 sudo[144319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:24 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 634 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:24 compute-1 python3.9[144321]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:24 compute-1 sudo[144319]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:25 compute-1 sudo[144471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gibhltilszkgxobwaoaqnryrkftapgje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089645.0021248-459-108595305196959/AnsiballZ_file.py'
Jan 22 13:47:25 compute-1 sudo[144471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:25.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:25 compute-1 python3.9[144473]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:25 compute-1 sudo[144471]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:25.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:26 compute-1 ceph-mon[81715]: pgmap v570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:26 compute-1 sudo[144623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajnpaalsqujpceuxfbirrjiieodjbpyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089646.1674435-612-247723179938874/AnsiballZ_command.py'
Jan 22 13:47:26 compute-1 sudo[144623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:26 compute-1 python3.9[144625]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:26 compute-1 sudo[144623]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:27.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:27 compute-1 python3.9[144777]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 13:47:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:27.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:29 compute-1 sudo[144938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvqyeaxydzmkcbiugyqtdabzoqlzrgik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089648.7264712-666-123639063743260/AnsiballZ_systemd_service.py'
Jan 22 13:47:29 compute-1 sudo[144938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:29 compute-1 ceph-mon[81715]: pgmap v571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:29 compute-1 podman[144901]: 2026-01-22 13:47:29.067893865 +0000 UTC m=+0.108460039 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 13:47:29 compute-1 python3.9[144944]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:47:29 compute-1 systemd[1]: Reloading.
Jan 22 13:47:29 compute-1 systemd-rc-local-generator[144981]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:47:29 compute-1 systemd-sysv-generator[144984]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:47:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:29.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:29.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:29 compute-1 sudo[144938]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:29 compute-1 ceph-mon[81715]: pgmap v572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:29 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 639 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:30 compute-1 sudo[145138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsmhduefuomqggrbaqyzpohamjzkrbcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089650.138829-690-170969833688050/AnsiballZ_command.py'
Jan 22 13:47:30 compute-1 sudo[145138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:30 compute-1 python3.9[145140]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:30 compute-1 sudo[145138]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:30 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:30 compute-1 ceph-mon[81715]: pgmap v573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:31 compute-1 sudo[145291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfbxmjawwtgicprrpoiiqwiumalfkneq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089650.845794-690-202122789750866/AnsiballZ_command.py'
Jan 22 13:47:31 compute-1 sudo[145291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:31 compute-1 python3.9[145293]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:31 compute-1 sudo[145291]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:31.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:31.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:31 compute-1 sudo[145444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zestmqtbnvydmhqbulcidngpuqiddqku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089651.4971266-690-255399930706865/AnsiballZ_command.py'
Jan 22 13:47:31 compute-1 sudo[145444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:32 compute-1 python3.9[145446]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:32 compute-1 sudo[145444]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:32 compute-1 sudo[145597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxxycrmhzqkzubsxfpydtwuyviuqtbkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089652.3162973-690-197163245875314/AnsiballZ_command.py'
Jan 22 13:47:32 compute-1 sudo[145597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:32 compute-1 python3.9[145599]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:32 compute-1 sudo[145597]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:33 compute-1 sudo[145750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwhijkpjeadskxnbjweoxxoejekzoeqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089652.9913356-690-237487916516986/AnsiballZ_command.py'
Jan 22 13:47:33 compute-1 sudo[145750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:33 compute-1 python3.9[145752]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:33.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:33 compute-1 sudo[145750]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:33.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:33 compute-1 sudo[145903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyxnrtmpxwyrwqtivbdqzczhfsxmjofu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089653.5981448-690-54939481693193/AnsiballZ_command.py'
Jan 22 13:47:33 compute-1 sudo[145903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:34 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:34 compute-1 python3.9[145905]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:34 compute-1 sudo[145903]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:34 compute-1 sudo[146056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cygessqeaeopiigcrrszzaaivcmprpjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089654.3363776-690-264494141217044/AnsiballZ_command.py'
Jan 22 13:47:34 compute-1 sudo[146056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:34 compute-1 python3.9[146058]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:34 compute-1 sudo[146056]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:35 compute-1 ceph-mon[81715]: pgmap v574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:35 compute-1 ceph-mon[81715]: pgmap v575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:35.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:35.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:37.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:37.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:37 compute-1 ceph-mon[81715]: pgmap v576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:37 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 644 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:38 compute-1 sudo[146209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iypcenvwqddfrfkxfotqohbgjlrvymlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089657.6818259-852-98732512881283/AnsiballZ_getent.py'
Jan 22 13:47:38 compute-1 sudo[146209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:38 compute-1 python3.9[146211]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 22 13:47:38 compute-1 sudo[146209]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:38 compute-1 ceph-mon[81715]: pgmap v577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:39 compute-1 sudo[146362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qodnwkinmmbdnuriamfbluledyqmcmiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089658.6374059-876-175018008892717/AnsiballZ_group.py'
Jan 22 13:47:39 compute-1 sudo[146362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:39 compute-1 python3.9[146364]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 13:47:39 compute-1 groupadd[146365]: group added to /etc/group: name=libvirt, GID=42473
Jan 22 13:47:39 compute-1 groupadd[146365]: group added to /etc/gshadow: name=libvirt
Jan 22 13:47:39 compute-1 groupadd[146365]: new group: name=libvirt, GID=42473
Jan 22 13:47:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:39.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:39 compute-1 sudo[146362]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:39.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:40 compute-1 sudo[146520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoujvrebcokburkpsrfbmhixogvvucji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089659.8272374-900-236478429151295/AnsiballZ_user.py'
Jan 22 13:47:40 compute-1 sudo[146520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:40 compute-1 python3.9[146522]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 13:47:40 compute-1 useradd[146524]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 22 13:47:40 compute-1 sudo[146520]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:47:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:41.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:47:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:41.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:41 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:41 compute-1 ceph-mon[81715]: pgmap v578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:41 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:42 compute-1 sudo[146680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsdnlamoevlqxtfdjyalsnlrcbjdosnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089662.2641525-933-277357355551948/AnsiballZ_setup.py'
Jan 22 13:47:42 compute-1 sudo[146680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:42 compute-1 python3.9[146682]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:47:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:42 compute-1 ceph-mon[81715]: pgmap v579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:42 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 654 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:43 compute-1 sudo[146680]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:43.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:43 compute-1 sudo[146764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oibuigokynnxwbfbtzcfewbbhswfutgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089662.2641525-933-277357355551948/AnsiballZ_dnf.py'
Jan 22 13:47:43 compute-1 sudo[146764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:43.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:43 compute-1 python3.9[146766]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:47:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:45 compute-1 ceph-mon[81715]: pgmap v580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:45.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:45.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:46 compute-1 podman[146770]: 2026-01-22 13:47:46.102606903 +0000 UTC m=+0.088581362 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 13:47:46 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:47 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:47 compute-1 ceph-mon[81715]: pgmap v581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:47:47.421 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:47:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:47:47.421 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:47:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:47:47.422 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:47:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:47:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:47.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:47:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:47.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:48 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:47:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:49.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:47:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:49 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:49 compute-1 ceph-mon[81715]: pgmap v582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:49 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 659 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:49.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:51.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:47:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:51.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:47:51 compute-1 ceph-mon[81715]: pgmap v583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:51 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:53 compute-1 ceph-mon[81715]: pgmap v584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:53.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:47:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:53.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:47:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:55 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:55.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:55.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:56 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:56 compute-1 ceph-mon[81715]: pgmap v585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:56 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:57.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:57.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:57 compute-1 ceph-mon[81715]: pgmap v586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:57 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:58 compute-1 ceph-mon[81715]: pgmap v587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:58 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 664 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:47:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:59.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:47:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:47:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:59.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:00 compute-1 podman[146798]: 2026-01-22 13:48:00.163997564 +0000 UTC m=+0.134559746 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 22 13:48:00 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:00 compute-1 ceph-mon[81715]: pgmap v588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:01 compute-1 sudo[146848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:48:01 compute-1 sudo[146848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:01 compute-1 sudo[146848]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:01.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:01 compute-1 sudo[146874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:48:01 compute-1 sudo[146874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:01 compute-1 sudo[146874]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:01 compute-1 sudo[146901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:48:01 compute-1 sudo[146901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:01 compute-1 sudo[146901]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:01 compute-1 sudo[146929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:48:01 compute-1 sudo[146929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:01.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:02 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:02 compute-1 sudo[146929]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:03 compute-1 ceph-mon[81715]: pgmap v589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:48:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:48:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:48:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:48:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:48:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:48:03 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 674 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:03.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:48:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:03.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:48:04 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:05 compute-1 ceph-mon[81715]: pgmap v590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:48:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:05.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:48:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:48:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:05.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:48:06 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:07 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:07 compute-1 ceph-mon[81715]: pgmap v591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:07.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:07.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:48:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:09.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:48:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:09.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:10 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:10 compute-1 ceph-mon[81715]: pgmap v592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:10 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 679 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:10 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:11.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:48:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:11.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:48:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:13.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:13.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:48:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:15.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:48:15 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:48:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:15.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:48:15 compute-1 ceph-mon[81715]: pgmap v593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:16 compute-1 sudo[147131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:48:16 compute-1 sudo[147131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:16 compute-1 sudo[147131]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:16 compute-1 sudo[147162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:48:16 compute-1 sudo[147162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:16 compute-1 sudo[147162]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:16 compute-1 podman[147155]: 2026-01-22 13:48:16.412893889 +0000 UTC m=+0.087915127 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 22 13:48:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:48:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-1 ceph-mon[81715]: pgmap v594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-1 ceph-mon[81715]: pgmap v595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-1 ceph-mon[81715]: pgmap v596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:48:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:48:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:17.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:48:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:17.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:18 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 684 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:48:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:19.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:48:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:19.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:19 compute-1 ceph-mon[81715]: pgmap v597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:20 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:20 compute-1 ceph-mon[81715]: pgmap v598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:48:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:21.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:48:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:21.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:23 compute-1 ceph-mon[81715]: pgmap v599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:23 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 694 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:23.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:23.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:25 compute-1 ceph-mon[81715]: pgmap v600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:48:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:25.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:48:25 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:25.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:27 compute-1 kernel: SELinux:  Converting 2775 SID table entries...
Jan 22 13:48:27 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:48:27 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 22 13:48:27 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:48:27 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:48:27 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:48:27 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:48:27 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:48:27 compute-1 ceph-mon[81715]: pgmap v601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:27.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:27.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:28 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:29.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:29 compute-1 ceph-mon[81715]: pgmap v602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:29 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 699 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:48:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:29.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:48:30 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:30 compute-1 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 22 13:48:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:31 compute-1 podman[147209]: 2026-01-22 13:48:31.156309259 +0000 UTC m=+0.117915915 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 13:48:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:31.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:31.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:31 compute-1 ceph-mon[81715]: pgmap v603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:32 compute-1 ceph-mon[81715]: pgmap v604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:48:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:33.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:48:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:33.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:35 compute-1 ceph-mon[81715]: pgmap v605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:48:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:35.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:48:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:35.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:35 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:37.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:37 compute-1 ceph-mon[81715]: pgmap v606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:37.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:38 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 704 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:39.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:39 compute-1 ceph-mon[81715]: pgmap v607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:39 compute-1 kernel: SELinux:  Converting 2775 SID table entries...
Jan 22 13:48:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:48:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:39.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:48:39 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:48:39 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 22 13:48:39 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:48:39 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:48:39 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:48:39 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:48:39 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:48:40 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:40 compute-1 ceph-mon[81715]: pgmap v608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:41.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:48:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:41.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:48:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.876170) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722876225, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 3088, "num_deletes": 507, "total_data_size": 5953851, "memory_usage": 6059256, "flush_reason": "Manual Compaction"}
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722903952, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 3880836, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12619, "largest_seqno": 15702, "table_properties": {"data_size": 3869661, "index_size": 6325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3781, "raw_key_size": 30436, "raw_average_key_size": 20, "raw_value_size": 3843024, "raw_average_value_size": 2563, "num_data_blocks": 276, "num_entries": 1499, "num_filter_entries": 1499, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089508, "oldest_key_time": 1769089508, "file_creation_time": 1769089722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 27850 microseconds, and 8969 cpu microseconds.
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.904028) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 3880836 bytes OK
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.904049) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.906180) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.906196) EVENT_LOG_v1 {"time_micros": 1769089722906191, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.906216) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 5938988, prev total WAL file size 5938988, number of live WAL files 2.
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.907685) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(3789KB)], [24(8117KB)]
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722907738, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 12192683, "oldest_snapshot_seqno": -1}
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 5026 keys, 10032565 bytes, temperature: kUnknown
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722988316, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 10032565, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9996957, "index_size": 21930, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 125776, "raw_average_key_size": 25, "raw_value_size": 9903805, "raw_average_value_size": 1970, "num_data_blocks": 912, "num_entries": 5026, "num_filter_entries": 5026, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.988602) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 10032565 bytes
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.991378) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.1 rd, 124.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.9 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 6057, records dropped: 1031 output_compression: NoCompression
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.991399) EVENT_LOG_v1 {"time_micros": 1769089722991388, "job": 12, "event": "compaction_finished", "compaction_time_micros": 80677, "compaction_time_cpu_micros": 27521, "output_level": 6, "num_output_files": 1, "total_output_size": 10032565, "num_input_records": 6057, "num_output_records": 5026, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722992072, "job": 12, "event": "table_file_deletion", "file_number": 26}
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722993259, "job": 12, "event": "table_file_deletion", "file_number": 24}
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.907589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.993290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.993296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.993298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.993300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:48:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.993301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:48:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:43 compute-1 ceph-mon[81715]: pgmap v609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:43 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 714 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:43.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:43.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:44 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:45 compute-1 ceph-mon[81715]: pgmap v610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:45.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:45.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:46 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:46 compute-1 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 22 13:48:47 compute-1 podman[147240]: 2026-01-22 13:48:47.075808518 +0000 UTC m=+0.056258431 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 13:48:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:48:47.421 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:48:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:48:47.422 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:48:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:48:47.422 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:48:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:47.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:47.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:48 compute-1 ceph-mon[81715]: pgmap v611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:48 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:48 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:49 compute-1 ceph-mon[81715]: pgmap v612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:49 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 719 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:49.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:49.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:50 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:51.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:51.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:52 compute-1 ceph-mon[81715]: pgmap v613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:53 compute-1 ceph-mon[81715]: pgmap v614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:53.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:53.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:54 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:55 compute-1 ceph-mon[81715]: pgmap v615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:55 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:55.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:55.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:55 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:56 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:48:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:57.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:48:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:57.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:57 compute-1 ceph-mon[81715]: pgmap v616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:57 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:58 compute-1 ceph-mon[81715]: pgmap v617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:58 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 724 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:48:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:59.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:48:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:48:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:48:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:59.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:48:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:01 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:01 compute-1 ceph-mon[81715]: pgmap v618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:01.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:01.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:02 compute-1 podman[153242]: 2026-01-22 13:49:02.114089675 +0000 UTC m=+0.093028304 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 13:49:02 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:03.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:03 compute-1 ceph-mon[81715]: pgmap v619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:03 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 734 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:03.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:49:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:05.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:49:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:05.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:06 compute-1 ceph-mon[81715]: pgmap v620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:06 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:07 compute-1 ceph-mon[81715]: pgmap v621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:07 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:07.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:49:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:07.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:49:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:09.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:49:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:09.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:49:09 compute-1 ceph-mon[81715]: pgmap v622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:09 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 739 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:11.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:11.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:12 compute-1 ceph-mon[81715]: pgmap v623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:13.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:13.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:13 compute-1 ceph-mon[81715]: pgmap v624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:15 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:15 compute-1 ceph-mon[81715]: pgmap v625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:15.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:15.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:16 compute-1 sudo[162812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:49:16 compute-1 sudo[162812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:16 compute-1 sudo[162812]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:16 compute-1 sudo[162889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:49:16 compute-1 sudo[162889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:16 compute-1 sudo[162889]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:16 compute-1 sudo[162956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:49:16 compute-1 sudo[162956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:16 compute-1 sudo[162956]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:16 compute-1 sudo[163021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:49:16 compute-1 sudo[163021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:17 compute-1 sudo[163021]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:17.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:17.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:18 compute-1 podman[163894]: 2026-01-22 13:49:18.06544634 +0000 UTC m=+0.054037177 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 13:49:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:19 compute-1 ceph-mon[81715]: pgmap v626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 13:49:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:49:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:49:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:49:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:49:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:49:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:49:19 compute-1 ceph-mon[81715]: pgmap v627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:19 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 744 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:49:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:19.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:49:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:19.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:21.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:21.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:23.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:23.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:24 compute-1 ceph-mon[81715]: pgmap v628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:24 compute-1 ceph-mon[81715]: pgmap v629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:24 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 749 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:25.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:25.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:27.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:27.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:28 compute-1 ceph-mon[81715]: pgmap v630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:29.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:29 compute-1 ceph-mon[81715]: pgmap v631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:29 compute-1 ceph-mon[81715]: pgmap v632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:29 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 754 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:29.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:31.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:31 compute-1 ceph-mon[81715]: pgmap v633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:31.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:33 compute-1 ceph-mon[81715]: pgmap v634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:33 compute-1 podman[164308]: 2026-01-22 13:49:33.193444319 +0000 UTC m=+0.132492327 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 13:49:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:33.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:33.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:34 compute-1 kernel: SELinux:  Converting 2776 SID table entries...
Jan 22 13:49:34 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:49:34 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 22 13:49:34 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:49:34 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:49:34 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:49:34 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:49:34 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:49:35 compute-1 groupadd[164349]: group added to /etc/group: name=dnsmasq, GID=993
Jan 22 13:49:35 compute-1 groupadd[164349]: group added to /etc/gshadow: name=dnsmasq
Jan 22 13:49:35 compute-1 groupadd[164349]: new group: name=dnsmasq, GID=993
Jan 22 13:49:35 compute-1 useradd[164356]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 22 13:49:35 compute-1 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 22 13:49:35 compute-1 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 22 13:49:35 compute-1 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 22 13:49:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:49:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:35.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:49:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:35.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:36 compute-1 groupadd[164369]: group added to /etc/group: name=clevis, GID=992
Jan 22 13:49:36 compute-1 groupadd[164369]: group added to /etc/gshadow: name=clevis
Jan 22 13:49:36 compute-1 groupadd[164369]: new group: name=clevis, GID=992
Jan 22 13:49:36 compute-1 useradd[164376]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 22 13:49:36 compute-1 usermod[164386]: add 'clevis' to group 'tss'
Jan 22 13:49:36 compute-1 usermod[164386]: add 'clevis' to shadow group 'tss'
Jan 22 13:49:36 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 764 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:37 compute-1 sudo[164399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:49:37 compute-1 sudo[164399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:37 compute-1 sudo[164399]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:37 compute-1 sudo[164425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:49:37 compute-1 sudo[164425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:37 compute-1 sudo[164425]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:37 compute-1 ceph-mon[81715]: pgmap v635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:37 compute-1 ceph-mon[81715]: pgmap v636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:49:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:49:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:37.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:49:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:37.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:49:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:38 compute-1 polkitd[43403]: Reloading rules
Jan 22 13:49:38 compute-1 polkitd[43403]: Collecting garbage unconditionally...
Jan 22 13:49:39 compute-1 polkitd[43403]: Loading rules from directory /etc/polkit-1/rules.d
Jan 22 13:49:39 compute-1 polkitd[43403]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 22 13:49:39 compute-1 polkitd[43403]: Finished loading, compiling and executing 3 rules
Jan 22 13:49:39 compute-1 polkitd[43403]: Reloading rules
Jan 22 13:49:39 compute-1 polkitd[43403]: Collecting garbage unconditionally...
Jan 22 13:49:39 compute-1 polkitd[43403]: Loading rules from directory /etc/polkit-1/rules.d
Jan 22 13:49:39 compute-1 polkitd[43403]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 22 13:49:39 compute-1 polkitd[43403]: Finished loading, compiling and executing 3 rules
Jan 22 13:49:39 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:39 compute-1 ceph-mon[81715]: pgmap v637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:39 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 769 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:49:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:39.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:49:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:39.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:40 compute-1 groupadd[164626]: group added to /etc/group: name=ceph, GID=167
Jan 22 13:49:40 compute-1 groupadd[164626]: group added to /etc/gshadow: name=ceph
Jan 22 13:49:40 compute-1 groupadd[164626]: new group: name=ceph, GID=167
Jan 22 13:49:40 compute-1 useradd[164632]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 22 13:49:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:41.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:41 compute-1 ceph-mon[81715]: pgmap v638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:41 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:49:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:41.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:49:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:42 compute-1 ceph-mon[81715]: pgmap v639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:43.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:43 compute-1 systemd[1]: Stopping OpenSSH server daemon...
Jan 22 13:49:43 compute-1 sshd[1008]: Received signal 15; terminating.
Jan 22 13:49:43 compute-1 systemd[1]: sshd.service: Deactivated successfully.
Jan 22 13:49:43 compute-1 systemd[1]: Stopped OpenSSH server daemon.
Jan 22 13:49:43 compute-1 systemd[1]: sshd.service: Consumed 2.498s CPU time, read 564.0K from disk, written 8.0K to disk.
Jan 22 13:49:43 compute-1 systemd[1]: Stopped target sshd-keygen.target.
Jan 22 13:49:43 compute-1 systemd[1]: Stopping sshd-keygen.target...
Jan 22 13:49:43 compute-1 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 13:49:43 compute-1 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 13:49:43 compute-1 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 13:49:43 compute-1 systemd[1]: Reached target sshd-keygen.target.
Jan 22 13:49:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:43 compute-1 systemd[1]: Starting OpenSSH server daemon...
Jan 22 13:49:43 compute-1 sshd[165237]: Server listening on 0.0.0.0 port 22.
Jan 22 13:49:43 compute-1 sshd[165237]: Server listening on :: port 22.
Jan 22 13:49:43 compute-1 systemd[1]: Started OpenSSH server daemon.
Jan 22 13:49:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:43.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:44 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:44 compute-1 ceph-mon[81715]: pgmap v640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:45 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:49:45 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:49:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:45.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:45 compute-1 systemd[1]: Reloading.
Jan 22 13:49:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:45.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:45 compute-1 systemd-rc-local-generator[165496]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:45 compute-1 systemd-sysv-generator[165500]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:46 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:49:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:49:47.422 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:49:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:49:47.424 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:49:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:49:47.424 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:49:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:49:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:47.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:49:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:49:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:47.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:49:49 compute-1 podman[168876]: 2026-01-22 13:49:49.072785263 +0000 UTC m=+0.056950926 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:49:49 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:49 compute-1 ceph-mon[81715]: pgmap v641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:49.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:49.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:50 compute-1 sudo[146764]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:50 compute-1 ceph-mon[81715]: pgmap v642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:50 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 774 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:51 compute-1 sudo[171404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbythssqybehofmdleciawmldmizapki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089790.9306872-969-121415331643259/AnsiballZ_systemd.py'
Jan 22 13:49:51 compute-1 sudo[171404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:49:51 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:51 compute-1 ceph-mon[81715]: pgmap v643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:51.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:51.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:51 compute-1 python3.9[171430]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:49:51 compute-1 systemd[1]: Reloading.
Jan 22 13:49:52 compute-1 systemd-sysv-generator[171855]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:52 compute-1 systemd-rc-local-generator[171850]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:52 compute-1 sudo[171404]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:52 compute-1 sudo[172971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfinnmvqdsbttepikflxjxojdnfmgaex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089792.4262648-969-121854462001952/AnsiballZ_systemd.py'
Jan 22 13:49:52 compute-1 sudo[172971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:49:53 compute-1 python3.9[172990]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:49:53 compute-1 systemd[1]: Reloading.
Jan 22 13:49:53 compute-1 systemd-sysv-generator[173412]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:53 compute-1 systemd-rc-local-generator[173407]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:53 compute-1 sudo[172971]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:53.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:53.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:54 compute-1 sudo[174143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkyclrbfldrlbuexdwdtxeapbbigcxjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089793.7273352-969-101095480691409/AnsiballZ_systemd.py'
Jan 22 13:49:54 compute-1 sudo[174143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:49:54 compute-1 python3.9[174163]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:49:54 compute-1 systemd[1]: Reloading.
Jan 22 13:49:54 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:54 compute-1 ceph-mon[81715]: pgmap v644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:54 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:54 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 784 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:54 compute-1 systemd-rc-local-generator[174448]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:54 compute-1 systemd-sysv-generator[174452]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:54 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:49:54 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:49:54 compute-1 systemd[1]: man-db-cache-update.service: Consumed 11.167s CPU time.
Jan 22 13:49:54 compute-1 systemd[1]: run-r9e99c1be9736462a9f21298dbcda3d62.service: Deactivated successfully.
Jan 22 13:49:54 compute-1 sudo[174143]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:55 compute-1 sudo[174610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoasfsbgkytlztpdzaeohwxtddzqzrvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089794.8335714-969-268407948356301/AnsiballZ_systemd.py'
Jan 22 13:49:55 compute-1 sudo[174610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:49:55 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:55 compute-1 ceph-mon[81715]: pgmap v645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:55 compute-1 python3.9[174612]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:49:55 compute-1 systemd[1]: Reloading.
Jan 22 13:49:55 compute-1 systemd-rc-local-generator[174642]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:55 compute-1 systemd-sysv-generator[174646]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:49:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:55.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:49:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:55.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:55 compute-1 sudo[174610]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:56 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:56 compute-1 sudo[174800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlpymnhyrqqvwlwvwfuepyunpowzpvti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089796.5256932-1056-247189899275758/AnsiballZ_systemd.py'
Jan 22 13:49:56 compute-1 sudo[174800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:49:57 compute-1 python3.9[174802]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:49:57 compute-1 systemd[1]: Reloading.
Jan 22 13:49:57 compute-1 systemd-rc-local-generator[174832]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:57 compute-1 systemd-sysv-generator[174835]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:57 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:57 compute-1 ceph-mon[81715]: pgmap v646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:57 compute-1 sudo[174800]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:57.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:49:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:57.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:49:58 compute-1 sudo[174990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yahmhzyprzutmuxbqlknmqqbddweanvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089797.7026744-1056-64285923463164/AnsiballZ_systemd.py'
Jan 22 13:49:58 compute-1 sudo[174990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:49:58 compute-1 python3.9[174992]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:49:58 compute-1 systemd[1]: Reloading.
Jan 22 13:49:58 compute-1 systemd-rc-local-generator[175022]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:58 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:58 compute-1 systemd-sysv-generator[175026]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:58 compute-1 sudo[174990]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:59 compute-1 sudo[175179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkdfwzjzwhuwmfrxwoznkzeujnowrzod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089798.924875-1056-265732305928387/AnsiballZ_systemd.py'
Jan 22 13:49:59 compute-1 sudo[175179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:49:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:59 compute-1 ceph-mon[81715]: pgmap v647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:59 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:59 compute-1 python3.9[175181]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:49:59 compute-1 systemd[1]: Reloading.
Jan 22 13:49:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:59.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:59 compute-1 systemd-rc-local-generator[175212]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:59 compute-1 systemd-sysv-generator[175216]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:49:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:59.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:59 compute-1 sudo[175179]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:00 compute-1 sudo[175369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogkleprntkfxqysturkuuolhhkoqhqhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089800.080385-1056-11436136573560/AnsiballZ_systemd.py'
Jan 22 13:50:00 compute-1 sudo[175369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:00 compute-1 python3.9[175371]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:00 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:00 compute-1 ceph-mon[81715]: Health detail: HEALTH_WARN 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops
Jan 22 13:50:00 compute-1 ceph-mon[81715]: [WRN] SLOW_OPS: 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops
Jan 22 13:50:00 compute-1 sudo[175369]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:01 compute-1 sudo[175524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwmbnbkgoybkzafdnxjsifzsstqnpndh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089800.9651587-1056-61251213185137/AnsiballZ_systemd.py'
Jan 22 13:50:01 compute-1 sudo[175524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:01 compute-1 python3.9[175526]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:01 compute-1 systemd[1]: Reloading.
Jan 22 13:50:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:01.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:01 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:01 compute-1 ceph-mon[81715]: pgmap v648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:01 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:01 compute-1 systemd-sysv-generator[175559]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:50:01 compute-1 systemd-rc-local-generator[175556]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:50:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:50:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:01.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:50:01 compute-1 sudo[175524]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:03 compute-1 ceph-mon[81715]: pgmap v649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:03.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:03.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:04 compute-1 podman[175590]: 2026-01-22 13:50:04.117514011 +0000 UTC m=+0.100342522 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 13:50:04 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:04 compute-1 sudo[175740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zabdrafcilmfingtvpzwrzrgnbnfgvvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089804.559815-1164-46390841393154/AnsiballZ_systemd.py'
Jan 22 13:50:04 compute-1 sudo[175740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:05 compute-1 python3.9[175742]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:50:05 compute-1 systemd[1]: Reloading.
Jan 22 13:50:05 compute-1 systemd-rc-local-generator[175772]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:50:05 compute-1 systemd-sysv-generator[175775]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:50:05 compute-1 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 22 13:50:05 compute-1 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 22 13:50:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:05 compute-1 ceph-mon[81715]: pgmap v650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:05 compute-1 sudo[175740]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:05.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:05.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:06 compute-1 sudo[175933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwrtmpuzwifhrnfygolmcupqzqmjbmwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089805.907601-1188-171591266935378/AnsiballZ_systemd.py'
Jan 22 13:50:06 compute-1 sudo[175933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:06 compute-1 python3.9[175935]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:06 compute-1 sudo[175933]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:06 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:06 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:07 compute-1 sudo[176088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsfzgkaklkderkzcvwfvmigiimecnjgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089806.8279464-1188-63194955771738/AnsiballZ_systemd.py'
Jan 22 13:50:07 compute-1 sudo[176088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:07 compute-1 python3.9[176090]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:07 compute-1 sudo[176088]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:07 compute-1 ceph-mon[81715]: pgmap v651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:07 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:07.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:07.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:07 compute-1 sudo[176243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujdvawymcfapjzmwbbtjnyryscivzplz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089807.6778474-1188-244777319803999/AnsiballZ_systemd.py'
Jan 22 13:50:07 compute-1 sudo[176243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:08 compute-1 python3.9[176245]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:08 compute-1 sudo[176243]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:08 compute-1 ceph-mon[81715]: pgmap v652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:08 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 794 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:08 compute-1 sudo[176398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbflrhosrvhyjxeoxvhpuwqvazaefsmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089808.5038195-1188-140944219174187/AnsiballZ_systemd.py'
Jan 22 13:50:08 compute-1 sudo[176398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:09 compute-1 python3.9[176400]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:09 compute-1 sudo[176398]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:09 compute-1 sudo[176553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhxrqsqosdgevnbfygkgiyivqcsxylmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089809.3500118-1188-104338083560055/AnsiballZ_systemd.py'
Jan 22 13:50:09 compute-1 sudo[176553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:09.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:09 compute-1 python3.9[176555]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:09.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:10 compute-1 sudo[176553]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:10 compute-1 sudo[176708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjwjyhfubthucuttgdbmgxghahydlrfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089810.1529362-1188-190949515214267/AnsiballZ_systemd.py'
Jan 22 13:50:10 compute-1 sudo[176708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:10 compute-1 python3.9[176710]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:10 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:10 compute-1 ceph-mon[81715]: pgmap v653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:10 compute-1 sudo[176708]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:11 compute-1 sudo[176863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txdxfgonhjjvaxvmaswjaixoubfqvbnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089810.9632843-1188-253281818774397/AnsiballZ_systemd.py'
Jan 22 13:50:11 compute-1 sudo[176863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:11 compute-1 python3.9[176865]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:11 compute-1 sudo[176863]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:11.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:11 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:11.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:12 compute-1 sudo[177018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbiridcfsnictgggzubndkmmencnyfit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089811.8164995-1188-90376841390672/AnsiballZ_systemd.py'
Jan 22 13:50:12 compute-1 sudo[177018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:12 compute-1 python3.9[177020]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:12 compute-1 sudo[177018]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:12 compute-1 sudo[177173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odtsjaazulxphosflrwzovuzyzbsznjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089812.6265483-1188-161838555348403/AnsiballZ_systemd.py'
Jan 22 13:50:12 compute-1 sudo[177173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:12 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:12 compute-1 ceph-mon[81715]: pgmap v654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:13 compute-1 python3.9[177175]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:13 compute-1 sudo[177173]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:13 compute-1 sudo[177328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhxpckqrerrnomvcyueleojvprfcgqiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089813.4115367-1188-258868101512290/AnsiballZ_systemd.py'
Jan 22 13:50:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:13 compute-1 sudo[177328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:13.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:13.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:14 compute-1 python3.9[177330]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:14 compute-1 sudo[177328]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:14 compute-1 sudo[177483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltydhzgkuthnpbjzsneysacjhxxdsthr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089814.257635-1188-88401517386690/AnsiballZ_systemd.py'
Jan 22 13:50:14 compute-1 sudo[177483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:14 compute-1 python3.9[177485]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:14 compute-1 sudo[177483]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:15 compute-1 sudo[177638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lldnappeptvznawtxflyrtdeswuiwnvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089815.0539954-1188-78780372094715/AnsiballZ_systemd.py'
Jan 22 13:50:15 compute-1 sudo[177638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:15 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:15 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 804 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:15 compute-1 python3.9[177640]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:15.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:15 compute-1 sudo[177638]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:15.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:16 compute-1 sudo[177793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofahdprpnkhwtzegquzjnplxkzpwqxrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089815.922193-1188-216404298257346/AnsiballZ_systemd.py'
Jan 22 13:50:16 compute-1 sudo[177793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:16 compute-1 ceph-mon[81715]: pgmap v655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:16 compute-1 python3.9[177795]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:16 compute-1 sudo[177793]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:17 compute-1 sudo[177948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbyirmrodrxatmzmzeitfgskxjmcotat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089816.7744968-1188-276392799329931/AnsiballZ_systemd.py'
Jan 22 13:50:17 compute-1 sudo[177948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:17 compute-1 python3.9[177950]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:17 compute-1 sudo[177948]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:17 compute-1 ceph-mon[81715]: pgmap v656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:17.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:17.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:19 compute-1 sudo[178103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoxgjojdnhammwukotezcrrcrdzxcmzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089818.8112774-1494-1219491825431/AnsiballZ_file.py'
Jan 22 13:50:19 compute-1 sudo[178103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:19 compute-1 podman[178105]: 2026-01-22 13:50:19.190123197 +0000 UTC m=+0.059392683 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 13:50:19 compute-1 python3.9[178106]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:50:19 compute-1 sudo[178103]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:19.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:19 compute-1 sudo[178274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knctriitmbcprhdivrmttaswmsmnicjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089819.4935143-1494-118152939634113/AnsiballZ_file.py'
Jan 22 13:50:19 compute-1 sudo[178274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:19.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:19 compute-1 python3.9[178276]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:50:20 compute-1 sudo[178274]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:20 compute-1 ceph-mon[81715]: pgmap v657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:20 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 809 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:20 compute-1 sudo[178426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpupvzomnbmcwctfzvtyhsnumyrgdblk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089820.1483867-1494-129663407316418/AnsiballZ_file.py'
Jan 22 13:50:20 compute-1 sudo[178426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:20 compute-1 python3.9[178428]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:50:20 compute-1 sudo[178426]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:21 compute-1 sudo[178578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioneawwwkokisqehedkybfmgxwrjfzoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089820.8083074-1494-41535902554995/AnsiballZ_file.py'
Jan 22 13:50:21 compute-1 sudo[178578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:21 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:21 compute-1 ceph-mon[81715]: pgmap v658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:21 compute-1 python3.9[178580]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:50:21 compute-1 sudo[178578]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:21.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:21 compute-1 sudo[178730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvojkwvjiirzhtsiunbeivqfudjflfqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089821.4424174-1494-265803588591424/AnsiballZ_file.py'
Jan 22 13:50:21 compute-1 sudo[178730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:21.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:22 compute-1 python3.9[178732]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:50:22 compute-1 sudo[178730]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:22 compute-1 sudo[178882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzwiiefahnecravhdwhhddfavipwogjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089822.216122-1494-118948281999539/AnsiballZ_file.py'
Jan 22 13:50:22 compute-1 sudo[178882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:22 compute-1 python3.9[178884]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:50:22 compute-1 sudo[178882]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:23 compute-1 ceph-mon[81715]: pgmap v659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:50:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:23.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:50:23 compute-1 python3.9[179034]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:50:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:23.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:24 compute-1 sudo[179184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drbcnvremtqebaalwnmfwhihqnjmjniz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089824.164804-1647-202505359062767/AnsiballZ_stat.py'
Jan 22 13:50:24 compute-1 sudo[179184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:24 compute-1 python3.9[179186]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:24 compute-1 sudo[179184]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:25 compute-1 sudo[179309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cevlpraotjzvkxqxykwnwvfhdwequvdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089824.164804-1647-202505359062767/AnsiballZ_copy.py'
Jan 22 13:50:25 compute-1 sudo[179309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:25 compute-1 python3.9[179311]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089824.164804-1647-202505359062767/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:25 compute-1 sudo[179309]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:25.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:25.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:26 compute-1 ceph-mon[81715]: pgmap v660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:26 compute-1 sudo[179461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlymqsaznmxzhtekumuujozekfbwbltc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089825.8114429-1647-46758468724157/AnsiballZ_stat.py'
Jan 22 13:50:26 compute-1 sudo[179461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:26 compute-1 python3.9[179463]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:26 compute-1 sudo[179461]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:26 compute-1 sudo[179586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmzydpqxcwczjcxutwetrojtmtrpypyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089825.8114429-1647-46758468724157/AnsiballZ_copy.py'
Jan 22 13:50:26 compute-1 sudo[179586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:27 compute-1 python3.9[179588]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089825.8114429-1647-46758468724157/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:27 compute-1 sudo[179586]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:27 compute-1 ceph-mon[81715]: pgmap v661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:27 compute-1 sudo[179740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykiocpybaniqxmapzrittjgwtxexuyhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089827.2377377-1647-171893181744896/AnsiballZ_stat.py'
Jan 22 13:50:27 compute-1 sudo[179740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:27 compute-1 sshd-session[179630]: Connection closed by authenticating user root 45.148.10.121 port 60834 [preauth]
Jan 22 13:50:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:27.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:27 compute-1 python3.9[179742]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:27 compute-1 sudo[179740]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:27.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:28 compute-1 sudo[179865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwyaziwatdelbsrasbtjbivknsakgohj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089827.2377377-1647-171893181744896/AnsiballZ_copy.py'
Jan 22 13:50:28 compute-1 sudo[179865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:28 compute-1 python3.9[179867]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089827.2377377-1647-171893181744896/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:28 compute-1 sudo[179865]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:28 compute-1 sudo[180017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukahelpgaxjyymhlutdoyfgsytcxqkjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089828.5214965-1647-226599488530983/AnsiballZ_stat.py'
Jan 22 13:50:28 compute-1 sudo[180017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:29 compute-1 python3.9[180019]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:29 compute-1 sudo[180017]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:29 compute-1 sudo[180142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwlacjonpckjkrcqshulrtbxruqarxrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089828.5214965-1647-226599488530983/AnsiballZ_copy.py'
Jan 22 13:50:29 compute-1 sudo[180142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:29 compute-1 python3.9[180144]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089828.5214965-1647-226599488530983/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:29 compute-1 sudo[180142]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:29.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:30.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:30 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:30 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 814 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:30 compute-1 sudo[180294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzwkdsnmnbcmvyqfkypgwyoliocbgaqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089830.0696983-1647-228333974146193/AnsiballZ_stat.py'
Jan 22 13:50:30 compute-1 sudo[180294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:30 compute-1 python3.9[180296]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:30 compute-1 sudo[180294]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:30 compute-1 sudo[180419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqtsarslghpgvbspgxzcfyhqlauqwtdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089830.0696983-1647-228333974146193/AnsiballZ_copy.py'
Jan 22 13:50:30 compute-1 sudo[180419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:31 compute-1 python3.9[180421]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089830.0696983-1647-228333974146193/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:31 compute-1 sudo[180419]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:31 compute-1 sudo[180571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjoxcgrjbzspzicqrnepesobfoukkual ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089831.3555472-1647-78635412338917/AnsiballZ_stat.py'
Jan 22 13:50:31 compute-1 sudo[180571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:31 compute-1 python3.9[180573]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:31 compute-1 sudo[180571]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:31.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:32 compute-1 ceph-mon[81715]: pgmap v662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:32 compute-1 ceph-mon[81715]: pgmap v663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:50:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:32.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:50:32 compute-1 sudo[180696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aohqlgzithvfvovxawpydwooauckqqzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089831.3555472-1647-78635412338917/AnsiballZ_copy.py'
Jan 22 13:50:32 compute-1 sudo[180696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:32 compute-1 python3.9[180698]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089831.3555472-1647-78635412338917/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:32 compute-1 sudo[180696]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:32 compute-1 sudo[180848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmulqrjmrsytevjvakjfqzeybuxyafvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089832.6156254-1647-72938286260976/AnsiballZ_stat.py'
Jan 22 13:50:32 compute-1 sudo[180848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:33 compute-1 ceph-mon[81715]: pgmap v664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:33 compute-1 python3.9[180850]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:33 compute-1 sudo[180848]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:33 compute-1 sudo[180971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlsymfwiwgrwnosxqdcmoxdybtkqrbwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089832.6156254-1647-72938286260976/AnsiballZ_copy.py'
Jan 22 13:50:33 compute-1 sudo[180971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:33 compute-1 python3.9[180973]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089832.6156254-1647-72938286260976/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:33 compute-1 sudo[180971]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:34.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:34 compute-1 sudo[181123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwrozifbsjzqcoyqsbcxdffjntusrltc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089833.892297-1647-45991852912940/AnsiballZ_stat.py'
Jan 22 13:50:34 compute-1 sudo[181123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:34.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:34 compute-1 podman[181125]: 2026-01-22 13:50:34.276596018 +0000 UTC m=+0.077906965 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 13:50:34 compute-1 python3.9[181126]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:34 compute-1 sudo[181123]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:34 compute-1 sudo[181274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkczrjvyjesdqwiqwgsuumucsntmfwbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089833.892297-1647-45991852912940/AnsiballZ_copy.py'
Jan 22 13:50:34 compute-1 sudo[181274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:34 compute-1 python3.9[181276]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089833.892297-1647-45991852912940/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:34 compute-1 sudo[181274]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:35 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 819 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:35 compute-1 ceph-mon[81715]: pgmap v665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:36.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:36.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:36 compute-1 ceph-mon[81715]: pgmap v666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:37 compute-1 sudo[181426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlxmipgznflccjlqwyuimhlwlfoowlik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089837.0801027-1986-50190770449421/AnsiballZ_command.py'
Jan 22 13:50:37 compute-1 sudo[181426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:37 compute-1 python3.9[181428]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 22 13:50:37 compute-1 sudo[181429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:50:37 compute-1 sudo[181429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:37 compute-1 sudo[181429]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:37 compute-1 sudo[181455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:50:37 compute-1 sudo[181455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:37 compute-1 sudo[181455]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:37 compute-1 sudo[181480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:50:37 compute-1 sudo[181480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:37 compute-1 sudo[181480]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:37 compute-1 sudo[181505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:50:37 compute-1 sudo[181505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:38.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:38 compute-1 sudo[181426]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:38 compute-1 sudo[181505]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:38.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:38 compute-1 sudo[181710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxvnaxabbkauefpnuqgsrdgvbobnizik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089838.4523466-2013-188355317078180/AnsiballZ_file.py'
Jan 22 13:50:38 compute-1 sudo[181710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:38 compute-1 python3.9[181712]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:38 compute-1 sudo[181710]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:39 compute-1 sudo[181862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssgxctwunzqzrrzzijonnrpmsjjlginp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089839.1474714-2013-180840156442284/AnsiballZ_file.py'
Jan 22 13:50:39 compute-1 sudo[181862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:39 compute-1 python3.9[181864]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:39 compute-1 sudo[181862]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:39 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:39 compute-1 ceph-mon[81715]: pgmap v667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:39 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 824 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:50:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:50:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:40.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:40 compute-1 sudo[182014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgdvqizlkwgklezlozgsbncpipmlijaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089839.8200502-2013-203737150193408/AnsiballZ_file.py'
Jan 22 13:50:40 compute-1 sudo[182014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:40.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:40 compute-1 python3.9[182016]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:40 compute-1 sudo[182014]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:50:40 compute-1 sudo[182166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycypojvtchgexexbyjzthysdickybggg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089840.4748588-2013-96833084131321/AnsiballZ_file.py'
Jan 22 13:50:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:50:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:50:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:50:40 compute-1 sudo[182166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:40 compute-1 python3.9[182168]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:41 compute-1 sudo[182166]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:41 compute-1 sudo[182318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcjobosbkvuehcfegyvjovmctksuailn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089841.1451929-2013-187240587147326/AnsiballZ_file.py'
Jan 22 13:50:41 compute-1 sudo[182318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:41 compute-1 python3.9[182320]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:41 compute-1 sudo[182318]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:42.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:42 compute-1 sudo[182470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jblhrgllmnyryqbcyzjvizbswcwdtdjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089841.797329-2013-34539492182338/AnsiballZ_file.py'
Jan 22 13:50:42 compute-1 sudo[182470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:42 compute-1 ceph-mon[81715]: pgmap v668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:42.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:42 compute-1 python3.9[182472]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:42 compute-1 sudo[182470]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:42 compute-1 sudo[182622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psjhglxjqbjseehiqdsrbgtkvyaqwvjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089842.4501116-2013-81855026676095/AnsiballZ_file.py'
Jan 22 13:50:42 compute-1 sudo[182622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:43 compute-1 python3.9[182624]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:43 compute-1 sudo[182622]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:43 compute-1 ceph-mon[81715]: pgmap v669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:43 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 834 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:43 compute-1 sudo[182774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cflbboitizzfrthzgqererodcgbzjlvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089843.2103703-2013-137513702504889/AnsiballZ_file.py'
Jan 22 13:50:43 compute-1 sudo[182774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:43 compute-1 python3.9[182776]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:43 compute-1 sudo[182774]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:44.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:44 compute-1 sudo[182926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svmswwoxbvofeqowysvjdnqahlrztxmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089843.8162901-2013-270775994800166/AnsiballZ_file.py'
Jan 22 13:50:44 compute-1 sudo[182926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:44 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:44.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:44 compute-1 python3.9[182928]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:44 compute-1 sudo[182926]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:44 compute-1 sudo[183078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqcuswssricjuxvrudxkpexxhabxsbso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089844.4962819-2013-156853313173033/AnsiballZ_file.py'
Jan 22 13:50:44 compute-1 sudo[183078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:44 compute-1 python3.9[183080]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:44 compute-1 sudo[183078]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:45 compute-1 ceph-mon[81715]: pgmap v670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:45 compute-1 sudo[183230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bazvslomxswdtjdvjtjhkkcyprsthdse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089845.0791264-2013-12852895319880/AnsiballZ_file.py'
Jan 22 13:50:45 compute-1 sudo[183230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:45 compute-1 python3.9[183232]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:45 compute-1 sudo[183230]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:46 compute-1 sudo[183382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yocznmzyjukinbxkzhjgttqwexkzairl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089845.722-2013-24522867267484/AnsiballZ_file.py'
Jan 22 13:50:46 compute-1 sudo[183382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:46.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:46 compute-1 python3.9[183384]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:46 compute-1 sudo[183382]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:46.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:46 compute-1 sudo[183534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjkgvgkabdgtpmyaucyqkqoyhlhsctwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089846.403462-2013-214825884026695/AnsiballZ_file.py'
Jan 22 13:50:46 compute-1 sudo[183534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:46 compute-1 python3.9[183536]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:46 compute-1 sudo[183534]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:50:47.423 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:50:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:50:47.424 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:50:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:50:47.424 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:50:47 compute-1 sudo[183686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tenyuhhipnxfbajlakgupgkmlxshrjmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089847.112199-2013-249268441902160/AnsiballZ_file.py'
Jan 22 13:50:47 compute-1 sudo[183686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:47 compute-1 python3.9[183688]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:47 compute-1 sudo[183686]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:48.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:48.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:49 compute-1 sudo[183838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgrloocmlahkuyenwwzldyxpwhzvvomj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089848.69712-2310-91455059603633/AnsiballZ_stat.py'
Jan 22 13:50:49 compute-1 sudo[183838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:49 compute-1 python3.9[183840]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:49 compute-1 sudo[183838]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:49 compute-1 sudo[183975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itqiulldyovxjjzppfvwnsnemkdftgjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089848.69712-2310-91455059603633/AnsiballZ_copy.py'
Jan 22 13:50:49 compute-1 sudo[183975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:49 compute-1 podman[183935]: 2026-01-22 13:50:49.635734047 +0000 UTC m=+0.056932656 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 13:50:49 compute-1 python3.9[183983]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089848.69712-2310-91455059603633/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:49 compute-1 sudo[183975]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:50.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:50 compute-1 ceph-mds[83358]: mds.beacon.cephfs.compute-1.ofmmzj missed beacon ack from the monitors
Jan 22 13:50:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:50 compute-1 sudo[184133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfdcgtckulicjqfdvgbyhqezkacpphwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089849.9758122-2310-214652688515803/AnsiballZ_stat.py'
Jan 22 13:50:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:50.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:50 compute-1 sudo[184133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:50 compute-1 python3.9[184135]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:50 compute-1 sudo[184133]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:50 compute-1 sudo[184256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqyannlwbnovijvlamrunagjgnvsqocc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089849.9758122-2310-214652688515803/AnsiballZ_copy.py'
Jan 22 13:50:50 compute-1 sudo[184256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:51 compute-1 python3.9[184258]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089849.9758122-2310-214652688515803/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:51 compute-1 sudo[184256]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:51 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:51 compute-1 sudo[184408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hceihtbnkgkqubrwplpovcwhcfirhywe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089851.2439692-2310-158632212388995/AnsiballZ_stat.py'
Jan 22 13:50:51 compute-1 sudo[184408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:51 compute-1 python3.9[184410]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:51 compute-1 sudo[184408]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:52.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:52 compute-1 sudo[184531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwofqvnjjjzkveczlqzxdbyduaxijjxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089851.2439692-2310-158632212388995/AnsiballZ_copy.py'
Jan 22 13:50:52 compute-1 sudo[184531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:52.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:52 compute-1 python3.9[184533]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089851.2439692-2310-158632212388995/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:52 compute-1 sudo[184531]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:52 compute-1 sudo[184633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:50:52 compute-1 sudo[184633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:52 compute-1 sudo[184633]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:52 compute-1 sudo[184682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:50:52 compute-1 sudo[184682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:52 compute-1 sudo[184682]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:52 compute-1 sudo[184732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxgrcaerztkkbokapeahcspuxdsyetgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089852.5605118-2310-151784308468463/AnsiballZ_stat.py'
Jan 22 13:50:52 compute-1 sudo[184732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:53 compute-1 python3.9[184735]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:53 compute-1 sudo[184732]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:53 compute-1 ceph-mon[81715]: pgmap v671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:53 compute-1 ceph-mon[81715]: pgmap v672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:50:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:53 compute-1 ceph-mon[81715]: pgmap v673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:53 compute-1 sudo[184856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnkfdmfzolcvyruatorwrrugyhyhwrjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089852.5605118-2310-151784308468463/AnsiballZ_copy.py'
Jan 22 13:50:53 compute-1 sudo[184856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:53 compute-1 python3.9[184858]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089852.5605118-2310-151784308468463/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:53 compute-1 sudo[184856]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:53 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 839 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:53 compute-1 ceph-mon[81715]: pgmap v674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:50:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:50:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:54.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:50:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:54 compute-1 sudo[185008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqkmmdrsilvefdfdhhmpdmeqoapgwzta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089853.9173703-2310-122207087521132/AnsiballZ_stat.py'
Jan 22 13:50:54 compute-1 sudo[185008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:54.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:54 compute-1 python3.9[185010]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:54 compute-1 sudo[185008]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:54 compute-1 sudo[185131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovfvnztbupivmbmgrzaojxvftwxtfyws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089853.9173703-2310-122207087521132/AnsiballZ_copy.py'
Jan 22 13:50:54 compute-1 sudo[185131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:55 compute-1 python3.9[185133]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089853.9173703-2310-122207087521132/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:55 compute-1 sudo[185131]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:55 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:55 compute-1 ceph-mon[81715]: pgmap v675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:55 compute-1 sudo[185283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqleapryyguzmxnivulagxnkacthtqom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089855.2714481-2310-56696901510138/AnsiballZ_stat.py'
Jan 22 13:50:55 compute-1 sudo[185283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:55 compute-1 python3.9[185285]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:55 compute-1 sudo[185283]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:56.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:56 compute-1 sudo[185406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpjqntxonlujzadqhimudwopoxhcqfyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089855.2714481-2310-56696901510138/AnsiballZ_copy.py'
Jan 22 13:50:56 compute-1 sudo[185406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:50:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:56.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:50:56 compute-1 python3.9[185408]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089855.2714481-2310-56696901510138/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:56 compute-1 sudo[185406]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:56 compute-1 sudo[185558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoedvxqefwyqoglhneqlvziokldxjdeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089856.543057-2310-18921804727397/AnsiballZ_stat.py'
Jan 22 13:50:56 compute-1 sudo[185558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:57 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:57 compute-1 python3.9[185560]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:57 compute-1 sudo[185558]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:57 compute-1 sudo[185681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiucpduncnqloaxrrralokhtyxqajpyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089856.543057-2310-18921804727397/AnsiballZ_copy.py'
Jan 22 13:50:57 compute-1 sudo[185681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:57 compute-1 python3.9[185683]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089856.543057-2310-18921804727397/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:57 compute-1 sudo[185681]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:57 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:57 compute-1 ceph-mon[81715]: pgmap v676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:57 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:58 compute-1 sudo[185833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jchhmeoukusdeemospvreoaohgsvvema ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089857.7539034-2310-90855280133990/AnsiballZ_stat.py'
Jan 22 13:50:58 compute-1 sudo[185833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:58.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.106774) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858106840, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1745, "num_deletes": 252, "total_data_size": 3614054, "memory_usage": 3669136, "flush_reason": "Manual Compaction"}
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858119798, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1454881, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15707, "largest_seqno": 17447, "table_properties": {"data_size": 1449261, "index_size": 2567, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 17004, "raw_average_key_size": 21, "raw_value_size": 1435883, "raw_average_value_size": 1831, "num_data_blocks": 112, "num_entries": 784, "num_filter_entries": 784, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089723, "oldest_key_time": 1769089723, "file_creation_time": 1769089858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 13075 microseconds, and 5612 cpu microseconds.
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.119858) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1454881 bytes OK
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.119882) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.121289) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.121305) EVENT_LOG_v1 {"time_micros": 1769089858121299, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.121324) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 3605715, prev total WAL file size 3605715, number of live WAL files 2.
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.122478) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1420KB)], [27(9797KB)]
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858122536, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 11487446, "oldest_snapshot_seqno": -1}
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 5352 keys, 8490400 bytes, temperature: kUnknown
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858179745, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 8490400, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8455832, "index_size": 20058, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 133897, "raw_average_key_size": 25, "raw_value_size": 8359955, "raw_average_value_size": 1562, "num_data_blocks": 828, "num_entries": 5352, "num_filter_entries": 5352, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.180065) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8490400 bytes
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.181965) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.3 rd, 148.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.6 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(13.7) write-amplify(5.8) OK, records in: 5810, records dropped: 458 output_compression: NoCompression
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.182021) EVENT_LOG_v1 {"time_micros": 1769089858182000, "job": 14, "event": "compaction_finished", "compaction_time_micros": 57351, "compaction_time_cpu_micros": 22014, "output_level": 6, "num_output_files": 1, "total_output_size": 8490400, "num_input_records": 5810, "num_output_records": 5352, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858183216, "job": 14, "event": "table_file_deletion", "file_number": 29}
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858185151, "job": 14, "event": "table_file_deletion", "file_number": 27}
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.122410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.185296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.185309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.185312) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.185314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:50:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.185316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:50:58 compute-1 python3.9[185835]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:58 compute-1 sudo[185833]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:50:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:58.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:58 compute-1 sudo[185956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldetxztndxpweqzitfydbwjkghywvnfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089857.7539034-2310-90855280133990/AnsiballZ_copy.py'
Jan 22 13:50:58 compute-1 sudo[185956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:58 compute-1 python3.9[185958]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089857.7539034-2310-90855280133990/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:59 compute-1 sudo[185956]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:59 compute-1 ceph-mon[81715]: pgmap v677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:59 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 844 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:59 compute-1 sudo[186108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtvzydtlmiyacgokbfxwcviaqwbljpay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089859.154633-2310-39767403656276/AnsiballZ_stat.py'
Jan 22 13:50:59 compute-1 sudo[186108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:59 compute-1 python3.9[186110]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:59 compute-1 sudo[186108]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:00.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:00 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:00 compute-1 sudo[186231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bngaucxmcecykggzystkcntvqqnrixcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089859.154633-2310-39767403656276/AnsiballZ_copy.py'
Jan 22 13:51:00 compute-1 sudo[186231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:00.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:00 compute-1 python3.9[186233]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089859.154633-2310-39767403656276/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:00 compute-1 sudo[186231]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:00 compute-1 sudo[186383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogixvflfgkhcxuvbifumsvysbeqlttxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089860.4921417-2310-94909633582820/AnsiballZ_stat.py'
Jan 22 13:51:00 compute-1 sudo[186383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:01 compute-1 python3.9[186385]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:01 compute-1 sudo[186383]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:01 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:01 compute-1 ceph-mon[81715]: pgmap v678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:01 compute-1 sudo[186506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkatfefliyfxpagtrhampcgfilmdzdca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089860.4921417-2310-94909633582820/AnsiballZ_copy.py'
Jan 22 13:51:01 compute-1 sudo[186506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:01 compute-1 python3.9[186508]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089860.4921417-2310-94909633582820/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:01 compute-1 sudo[186506]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:51:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:02.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:51:02 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:02 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:02 compute-1 sudo[186658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkwfqutnbtflcbejfrfqkjiaopukovns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089861.9539995-2310-250795205863411/AnsiballZ_stat.py'
Jan 22 13:51:02 compute-1 sudo[186658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:02.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:02 compute-1 python3.9[186660]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:02 compute-1 sudo[186658]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:02 compute-1 sudo[186781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stsdeyhawlysgjqhizwjwajjphieckmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089861.9539995-2310-250795205863411/AnsiballZ_copy.py'
Jan 22 13:51:02 compute-1 sudo[186781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:02 compute-1 python3.9[186783]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089861.9539995-2310-250795205863411/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:02 compute-1 sudo[186781]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:03 compute-1 ceph-mon[81715]: pgmap v679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:03 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 854 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:03 compute-1 sudo[186933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ockbhomtiubybhtonrrgvwzkhcigvepf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089863.1165297-2310-40228163846909/AnsiballZ_stat.py'
Jan 22 13:51:03 compute-1 sudo[186933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:03 compute-1 python3.9[186935]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:03 compute-1 sudo[186933]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:04 compute-1 sudo[187056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhsqaubcuqaawovplyzllannymukspua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089863.1165297-2310-40228163846909/AnsiballZ_copy.py'
Jan 22 13:51:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:04 compute-1 sudo[187056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:04.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:04 compute-1 python3.9[187058]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089863.1165297-2310-40228163846909/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:04 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:04 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:04 compute-1 sudo[187056]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:04.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:04 compute-1 sudo[187218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvqocjpddiqtzvyibcfhwqjntnsuadxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089864.417885-2310-88465903192870/AnsiballZ_stat.py'
Jan 22 13:51:04 compute-1 sudo[187218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:04 compute-1 podman[187182]: 2026-01-22 13:51:04.76095552 +0000 UTC m=+0.115575327 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 13:51:04 compute-1 python3.9[187227]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:04 compute-1 sudo[187218]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:05 compute-1 sudo[187357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aokvlabfkqjikctohlhkebdoqmtfxsmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089864.417885-2310-88465903192870/AnsiballZ_copy.py'
Jan 22 13:51:05 compute-1 sudo[187357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:05 compute-1 ceph-mon[81715]: pgmap v680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:05 compute-1 python3.9[187359]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089864.417885-2310-88465903192870/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:05 compute-1 sudo[187357]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:05 compute-1 sudo[187509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmrrwnlkununrzbfuaoylhnlibouiegn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089865.5973155-2310-18496025185074/AnsiballZ_stat.py'
Jan 22 13:51:05 compute-1 sudo[187509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:06.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:06 compute-1 python3.9[187511]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:06 compute-1 sudo[187509]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:06 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:51:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:06.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:51:06 compute-1 sudo[187632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxieeksxhdctqhtznukdhgizkubjzroa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089865.5973155-2310-18496025185074/AnsiballZ_copy.py'
Jan 22 13:51:06 compute-1 sudo[187632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:06 compute-1 python3.9[187634]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089865.5973155-2310-18496025185074/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:06 compute-1 sudo[187632]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:07 compute-1 ceph-mon[81715]: pgmap v681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:07 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:08.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:08 compute-1 python3.9[187784]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:51:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:08.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:09 compute-1 sudo[187937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzppmsaavesxskzufpzpfqbawjjetbmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089868.5883737-2928-144895832652911/AnsiballZ_seboolean.py'
Jan 22 13:51:09 compute-1 sudo[187937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:09 compute-1 python3.9[187939]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 22 13:51:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:51:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:10.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:51:10 compute-1 ceph-mon[81715]: pgmap v682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:10 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:10 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 859 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:10 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:51:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:10.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:51:11 compute-1 ceph-mon[81715]: pgmap v683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:11 compute-1 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 13:51:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:12.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:12 compute-1 sudo[187937]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:12.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:12 compute-1 sudo[188093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lynwdoxkoznupovbypatgniqkcyervzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089872.4616752-2952-235432751022773/AnsiballZ_copy.py'
Jan 22 13:51:12 compute-1 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 22 13:51:12 compute-1 sudo[188093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:13 compute-1 python3.9[188095]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:13 compute-1 sudo[188093]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:13 compute-1 sudo[188245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfisyffbnayohqzowuvfaoqxmsgmcnue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089873.17773-2952-153578212131255/AnsiballZ_copy.py'
Jan 22 13:51:13 compute-1 sudo[188245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:13 compute-1 python3.9[188247]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:13 compute-1 sudo[188245]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:13 compute-1 ceph-mon[81715]: pgmap v684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:14.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:14 compute-1 sudo[188397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdbzrxbjtlrphahnnnevhiojxjmdxygw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089873.8653088-2952-53061183662843/AnsiballZ_copy.py'
Jan 22 13:51:14 compute-1 sudo[188397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:51:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:14.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:51:14 compute-1 python3.9[188399]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:14 compute-1 sudo[188397]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:14 compute-1 sudo[188549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwuxzkvgirnkfdwtvrkmvcwgftxdrmsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089874.524163-2952-173214639407873/AnsiballZ_copy.py'
Jan 22 13:51:14 compute-1 sudo[188549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:14 compute-1 python3.9[188551]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:14 compute-1 sudo[188549]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:15 compute-1 sudo[188701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amaoobfkvdjmflwbkxxuiacrlglosckv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089875.1119263-2952-167594755511208/AnsiballZ_copy.py'
Jan 22 13:51:15 compute-1 sudo[188701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:16 compute-1 ceph-mon[81715]: pgmap v685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:16.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:16 compute-1 python3.9[188703]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:16 compute-1 sudo[188701]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:51:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:16.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:51:16 compute-1 sudo[188853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giojqfilopaaaiutextgghndewoysdha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089876.4625127-3060-50963469182734/AnsiballZ_copy.py'
Jan 22 13:51:16 compute-1 sudo[188853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:16 compute-1 python3.9[188855]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:16 compute-1 sudo[188853]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:17 compute-1 ceph-mon[81715]: pgmap v686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:17 compute-1 sudo[189005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxgktiqdocbugrsbpjarkmuyyeubynml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089877.0977864-3060-242906655335836/AnsiballZ_copy.py'
Jan 22 13:51:17 compute-1 sudo[189005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:17 compute-1 python3.9[189007]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:17 compute-1 sudo[189005]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:18 compute-1 sudo[189157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xewscfbtmszdchothssoahaptaioollo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089877.7389522-3060-23690498216245/AnsiballZ_copy.py'
Jan 22 13:51:18 compute-1 sudo[189157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:18.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:18.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:18 compute-1 python3.9[189159]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:18 compute-1 sudo[189157]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:18 compute-1 sudo[189309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfumdkxehgotqpmrvwfkgecereruudeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089878.5299933-3060-127972098764289/AnsiballZ_copy.py'
Jan 22 13:51:18 compute-1 sudo[189309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:18 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:18 compute-1 ceph-mon[81715]: pgmap v687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:18 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 864 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:18 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:19 compute-1 python3.9[189311]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:19 compute-1 sudo[189309]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:19 compute-1 sudo[189461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryluwqrerqzhrsdsqlqduvgxfrbcmjbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089879.2509751-3060-17198982765069/AnsiballZ_copy.py'
Jan 22 13:51:19 compute-1 sudo[189461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:19 compute-1 python3.9[189463]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:19 compute-1 sudo[189461]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:19 compute-1 podman[189464]: 2026-01-22 13:51:19.876150851 +0000 UTC m=+0.083862047 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 13:51:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:20.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:20.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:20 compute-1 sudo[189633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vydhukuofnpcnkcdwmxgmmxaansjfowg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089880.3869421-3168-35298528295654/AnsiballZ_systemd.py'
Jan 22 13:51:20 compute-1 sudo[189633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:20 compute-1 ceph-mon[81715]: pgmap v688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:20 compute-1 python3.9[189635]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:51:21 compute-1 systemd[1]: Reloading.
Jan 22 13:51:21 compute-1 systemd-rc-local-generator[189663]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:51:21 compute-1 systemd-sysv-generator[189666]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:51:21 compute-1 systemd[1]: Starting libvirt logging daemon socket...
Jan 22 13:51:21 compute-1 systemd[1]: Listening on libvirt logging daemon socket.
Jan 22 13:51:21 compute-1 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 22 13:51:21 compute-1 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 22 13:51:21 compute-1 systemd[1]: Starting libvirt logging daemon...
Jan 22 13:51:21 compute-1 systemd[1]: Started libvirt logging daemon.
Jan 22 13:51:21 compute-1 sudo[189633]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:21 compute-1 sudo[189826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggwssxikornrenwncsdoaeayawaoflhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089881.6420724-3168-217146352608721/AnsiballZ_systemd.py'
Jan 22 13:51:21 compute-1 sudo[189826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:51:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:22.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:51:22 compute-1 python3.9[189828]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:51:22 compute-1 systemd[1]: Reloading.
Jan 22 13:51:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:22.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:22 compute-1 systemd-rc-local-generator[189854]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:51:22 compute-1 systemd-sysv-generator[189859]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:51:22 compute-1 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 22 13:51:22 compute-1 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 22 13:51:22 compute-1 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 22 13:51:22 compute-1 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 22 13:51:22 compute-1 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 22 13:51:22 compute-1 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 22 13:51:22 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Jan 22 13:51:22 compute-1 systemd[1]: Started libvirt nodedev daemon.
Jan 22 13:51:22 compute-1 sudo[189826]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:23 compute-1 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 22 13:51:23 compute-1 sudo[190043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcusxlqttnysuhlkacayudbkjwpwxmht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089882.8324597-3168-190807882135268/AnsiballZ_systemd.py'
Jan 22 13:51:23 compute-1 sudo[190043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:23 compute-1 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 22 13:51:23 compute-1 python3.9[190045]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:51:23 compute-1 systemd[1]: Reloading.
Jan 22 13:51:23 compute-1 ceph-mon[81715]: pgmap v689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:23 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:23 compute-1 systemd-rc-local-generator[190075]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:51:23 compute-1 systemd-sysv-generator[190079]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:51:23 compute-1 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 22 13:51:23 compute-1 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 22 13:51:23 compute-1 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 22 13:51:23 compute-1 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 22 13:51:23 compute-1 systemd[1]: Starting libvirt proxy daemon...
Jan 22 13:51:23 compute-1 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 22 13:51:23 compute-1 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 22 13:51:23 compute-1 systemd[1]: Started libvirt proxy daemon.
Jan 22 13:51:23 compute-1 sudo[190043]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:24.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:24.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:24 compute-1 sudo[190264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urwenrcukmeesgbnxxodzjlikpbarzzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089884.031513-3168-145306998222626/AnsiballZ_systemd.py'
Jan 22 13:51:24 compute-1 sudo[190264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:24 compute-1 python3.9[190266]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:51:24 compute-1 systemd[1]: Reloading.
Jan 22 13:51:24 compute-1 systemd-rc-local-generator[190290]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:51:24 compute-1 systemd-sysv-generator[190294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:51:24 compute-1 setroubleshoot[190016]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 78ef7930-963d-408a-ac09-8b3721c30352
Jan 22 13:51:24 compute-1 setroubleshoot[190016]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 22 13:51:24 compute-1 setroubleshoot[190016]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 78ef7930-963d-408a-ac09-8b3721c30352
Jan 22 13:51:24 compute-1 setroubleshoot[190016]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 22 13:51:24 compute-1 systemd[1]: Listening on libvirt locking daemon socket.
Jan 22 13:51:24 compute-1 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 22 13:51:24 compute-1 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 22 13:51:25 compute-1 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 22 13:51:25 compute-1 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 22 13:51:25 compute-1 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 22 13:51:25 compute-1 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 22 13:51:25 compute-1 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 22 13:51:25 compute-1 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 22 13:51:25 compute-1 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 22 13:51:25 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Jan 22 13:51:25 compute-1 systemd[1]: Started libvirt QEMU daemon.
Jan 22 13:51:25 compute-1 sudo[190264]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:25 compute-1 sudo[190480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwyedsgwigoitopiapkwaqdkuoagbizb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089885.2607467-3168-194051880535725/AnsiballZ_systemd.py'
Jan 22 13:51:25 compute-1 sudo[190480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:25 compute-1 ceph-mon[81715]: pgmap v690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:25 compute-1 python3.9[190482]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:51:25 compute-1 systemd[1]: Reloading.
Jan 22 13:51:26 compute-1 systemd-rc-local-generator[190509]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:51:26 compute-1 systemd-sysv-generator[190513]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:51:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:26.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:26 compute-1 systemd[1]: Starting libvirt secret daemon socket...
Jan 22 13:51:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:26.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:26 compute-1 systemd[1]: Listening on libvirt secret daemon socket.
Jan 22 13:51:26 compute-1 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 22 13:51:26 compute-1 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 22 13:51:26 compute-1 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 22 13:51:26 compute-1 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 22 13:51:26 compute-1 systemd[1]: Starting libvirt secret daemon...
Jan 22 13:51:26 compute-1 systemd[1]: Started libvirt secret daemon.
Jan 22 13:51:26 compute-1 sudo[190480]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:26 compute-1 ceph-mon[81715]: pgmap v691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:27 compute-1 sudo[190692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hecciekmctdmxnewpuyjlnextqxvkxma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089886.8785617-3279-112820741754727/AnsiballZ_file.py'
Jan 22 13:51:27 compute-1 sudo[190692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:27 compute-1 python3.9[190694]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:27 compute-1 sudo[190692]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:27 compute-1 sudo[190844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ievcckwgfkklzxicjnhtmmptatwacmaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089887.6332483-3303-155533672401817/AnsiballZ_find.py'
Jan 22 13:51:27 compute-1 sudo[190844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:28.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:28 compute-1 python3.9[190846]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 13:51:28 compute-1 sudo[190844]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:28 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:51:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:28.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:51:28 compute-1 sudo[190996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypqmtwjpfvsoqioyzzwikvxhlcfhgbzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089888.4544718-3327-180457346566804/AnsiballZ_command.py'
Jan 22 13:51:28 compute-1 sudo[190996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:28 compute-1 python3.9[190998]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:51:28 compute-1 sudo[190996]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:30.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:30 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:30 compute-1 python3.9[191152]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 13:51:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:30.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:30 compute-1 ceph-mon[81715]: pgmap v692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:30 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:31 compute-1 python3.9[191302]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:31 compute-1 ceph-mon[81715]: pgmap v693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.625222) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891625324, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 643, "num_deletes": 251, "total_data_size": 922151, "memory_usage": 934712, "flush_reason": "Manual Compaction"}
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Jan 22 13:51:31 compute-1 python3.9[191423]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089890.7701848-3384-195326700602517/.source.xml follow=False _original_basename=secret.xml.j2 checksum=661e779e9ad9ab9796e6f7af12c5e6a2862cccb5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891871155, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 605960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17452, "largest_seqno": 18090, "table_properties": {"data_size": 602925, "index_size": 943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7970, "raw_average_key_size": 19, "raw_value_size": 596493, "raw_average_value_size": 1469, "num_data_blocks": 42, "num_entries": 406, "num_filter_entries": 406, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089858, "oldest_key_time": 1769089858, "file_creation_time": 1769089891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 245947 microseconds, and 3997 cpu microseconds.
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.871207) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 605960 bytes OK
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.871228) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.877130) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.877175) EVENT_LOG_v1 {"time_micros": 1769089891877165, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.877247) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 918515, prev total WAL file size 918515, number of live WAL files 2.
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.878137) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(591KB)], [30(8291KB)]
Jan 22 13:51:31 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891878227, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 9096360, "oldest_snapshot_seqno": -1}
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 5247 keys, 7419371 bytes, temperature: kUnknown
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089892016701, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 7419371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7386391, "index_size": 18790, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 132589, "raw_average_key_size": 25, "raw_value_size": 7292995, "raw_average_value_size": 1389, "num_data_blocks": 771, "num_entries": 5247, "num_filter_entries": 5247, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.017023) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7419371 bytes
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.033169) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.6 rd, 53.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 8.1 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(27.3) write-amplify(12.2) OK, records in: 5758, records dropped: 511 output_compression: NoCompression
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.033215) EVENT_LOG_v1 {"time_micros": 1769089892033197, "job": 16, "event": "compaction_finished", "compaction_time_micros": 138574, "compaction_time_cpu_micros": 21703, "output_level": 6, "num_output_files": 1, "total_output_size": 7419371, "num_input_records": 5758, "num_output_records": 5247, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089892034171, "job": 16, "event": "table_file_deletion", "file_number": 32}
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089892037720, "job": 16, "event": "table_file_deletion", "file_number": 30}
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.878026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.037803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.037820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.037822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.037824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:51:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.037829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:51:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:51:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:32.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:51:32 compute-1 sudo[191573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxmcvlwcqqehnqbrxeftuexuqgxjftuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089892.051717-3429-133161258259500/AnsiballZ_command.py'
Jan 22 13:51:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:51:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:32.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:51:32 compute-1 sudo[191573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:32 compute-1 python3.9[191575]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 088fe176-0106-5401-803c-2da38b73b76a
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:51:32 compute-1 polkitd[43403]: Registered Authentication Agent for unix-process:191577:374487 (system bus name :1.1816 [pkttyagent --process 191577 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 22 13:51:32 compute-1 polkitd[43403]: Unregistered Authentication Agent for unix-process:191577:374487 (system bus name :1.1816, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 22 13:51:32 compute-1 polkitd[43403]: Registered Authentication Agent for unix-process:191576:374486 (system bus name :1.1817 [pkttyagent --process 191576 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 22 13:51:32 compute-1 polkitd[43403]: Unregistered Authentication Agent for unix-process:191576:374486 (system bus name :1.1817, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 22 13:51:32 compute-1 sudo[191573]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:33 compute-1 python3.9[191737]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:33 compute-1 sudo[191887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gecsbnfmadlrumvgsqtxbccgapgausqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089893.6067994-3477-232232944258831/AnsiballZ_command.py'
Jan 22 13:51:33 compute-1 sudo[191887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:34 compute-1 sudo[191887]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:34.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:34 compute-1 ceph-mon[81715]: pgmap v694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:34 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 884 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:34 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:34 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:34.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:34 compute-1 sudo[192040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wermafyoysjjzwkathqbqduaqlnkzmjk ; FSID=088fe176-0106-5401-803c-2da38b73b76a KEY=AQCZJnJpAAAAABAAqtkA7doM+5EIMhShr22e9w== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089894.3374362-3501-141051491669582/AnsiballZ_command.py'
Jan 22 13:51:34 compute-1 sudo[192040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:34 compute-1 polkitd[43403]: Registered Authentication Agent for unix-process:192043:374717 (system bus name :1.1820 [pkttyagent --process 192043 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 22 13:51:34 compute-1 polkitd[43403]: Unregistered Authentication Agent for unix-process:192043:374717 (system bus name :1.1820, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 22 13:51:34 compute-1 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 22 13:51:34 compute-1 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.043s CPU time.
Jan 22 13:51:34 compute-1 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 22 13:51:35 compute-1 podman[192049]: 2026-01-22 13:51:35.062889092 +0000 UTC m=+0.091935056 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 13:51:35 compute-1 sudo[192040]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:35 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:35 compute-1 sudo[192224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plsohsahadkvdfoqdyeirorllerdwopt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089895.4868336-3525-194423764069132/AnsiballZ_copy.py'
Jan 22 13:51:35 compute-1 sudo[192224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:35 compute-1 ceph-mon[81715]: pgmap v695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:35 compute-1 python3.9[192226]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:35 compute-1 sudo[192224]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:36.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:36.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:36 compute-1 sudo[192376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dntkfwlayeulhieipvnszjxfngthrqpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089896.1780925-3549-108645698587377/AnsiballZ_stat.py'
Jan 22 13:51:36 compute-1 sudo[192376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:36 compute-1 python3.9[192378]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:36 compute-1 sudo[192376]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:36 compute-1 ceph-mon[81715]: pgmap v696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:37 compute-1 sudo[192499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auyrwjomboctgczeovsyencpysdxwwlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089896.1780925-3549-108645698587377/AnsiballZ_copy.py'
Jan 22 13:51:37 compute-1 sudo[192499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:37 compute-1 python3.9[192501]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089896.1780925-3549-108645698587377/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:37 compute-1 sudo[192499]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:38 compute-1 sudo[192651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzofryypiyancexqyrbekfjwbvzvzwkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089897.7561944-3597-248526681995651/AnsiballZ_file.py'
Jan 22 13:51:38 compute-1 sudo[192651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:38.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:38 compute-1 python3.9[192653]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:38 compute-1 sudo[192651]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 13:51:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:38.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 13:51:38 compute-1 sudo[192803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbfchmqhhorfkwxsflgazpemrbmlgjxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089898.4657161-3621-34689371138292/AnsiballZ_stat.py'
Jan 22 13:51:38 compute-1 sudo[192803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:39 compute-1 python3.9[192805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:39 compute-1 sudo[192803]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:39 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:39 compute-1 sudo[192881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnpegiyhdhsrdenmrcmiibcipgarpbzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089898.4657161-3621-34689371138292/AnsiballZ_file.py'
Jan 22 13:51:39 compute-1 sudo[192881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:39 compute-1 python3.9[192883]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:39 compute-1 sudo[192881]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:40.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:40 compute-1 sudo[193033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llhzwvxwylqqrtznrzasypwyzvkctccz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089899.859249-3657-242165567839760/AnsiballZ_stat.py'
Jan 22 13:51:40 compute-1 sudo[193033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:40 compute-1 ceph-mon[81715]: pgmap v697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:40 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 889 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:40 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:51:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:40.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:51:40 compute-1 python3.9[193035]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:40 compute-1 sudo[193033]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:40 compute-1 sudo[193111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylktxbvebkbdtjnrfsuiwzqtkbsipdms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089899.859249-3657-242165567839760/AnsiballZ_file.py'
Jan 22 13:51:40 compute-1 sudo[193111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:40 compute-1 python3.9[193113]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.7qojvd0v recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:40 compute-1 sudo[193111]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:41 compute-1 sudo[193263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcvgfloyylotexudiraeudxdliskwuog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089901.042742-3693-93445567939275/AnsiballZ_stat.py'
Jan 22 13:51:41 compute-1 sudo[193263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:41 compute-1 python3.9[193265]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:41 compute-1 sudo[193263]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:41 compute-1 ceph-mon[81715]: pgmap v698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:41 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:41 compute-1 sudo[193341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsilaxexsqprvnfmijmakxjwgkjnecfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089901.042742-3693-93445567939275/AnsiballZ_file.py'
Jan 22 13:51:41 compute-1 sudo[193341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:51:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:42.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:51:42 compute-1 python3.9[193343]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:42 compute-1 sudo[193341]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:42.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:42 compute-1 sudo[193493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moktkhefjmphextiprczxqvovxzyamon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089902.507228-3732-247151133786739/AnsiballZ_command.py'
Jan 22 13:51:42 compute-1 sudo[193493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:42 compute-1 python3.9[193495]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:51:43 compute-1 sudo[193493]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:43 compute-1 ceph-mon[81715]: pgmap v699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:43 compute-1 sudo[193646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-timvqqcdscuywbrshsrwaypkaygfqese ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769089903.3589-3756-269590123844653/AnsiballZ_edpm_nftables_from_files.py'
Jan 22 13:51:43 compute-1 sudo[193646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:44 compute-1 python3[193648]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 13:51:44 compute-1 sudo[193646]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:44.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:44.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:44 compute-1 sudo[193798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkyixecgstezwfwwuodqrvwmlgrpofqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089904.2321699-3780-98434696472383/AnsiballZ_stat.py'
Jan 22 13:51:44 compute-1 sudo[193798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:44 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:44 compute-1 python3.9[193800]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:44 compute-1 sudo[193798]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:45 compute-1 sudo[193877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlmhzefscysjjlgamffitzowolrhsayb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089904.2321699-3780-98434696472383/AnsiballZ_file.py'
Jan 22 13:51:45 compute-1 sudo[193877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:45 compute-1 python3.9[193879]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:45 compute-1 sudo[193877]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:45 compute-1 sudo[194029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czotbjubgspelprkgggiiroqjhinzyug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089905.5316596-3816-75644588008420/AnsiballZ_stat.py'
Jan 22 13:51:45 compute-1 sudo[194029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:46 compute-1 ceph-mon[81715]: pgmap v700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:46 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:46 compute-1 python3.9[194031]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:46 compute-1 sudo[194029]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:51:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:46.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:51:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:46.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:46 compute-1 sudo[194154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlbighggbhzdkrjhzardypvlcybioysa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089905.5316596-3816-75644588008420/AnsiballZ_copy.py'
Jan 22 13:51:46 compute-1 sudo[194154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:46 compute-1 python3.9[194156]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089905.5316596-3816-75644588008420/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:46 compute-1 sudo[194154]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:47 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:47 compute-1 ceph-mon[81715]: pgmap v701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:47 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:47 compute-1 sudo[194306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yihxzluqfeotuujxzcisgouqzltxahgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089906.9800751-3861-70380718752175/AnsiballZ_stat.py'
Jan 22 13:51:47 compute-1 sudo[194306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:51:47.424 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:51:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:51:47.426 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:51:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:51:47.426 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:51:47 compute-1 python3.9[194308]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:47 compute-1 sudo[194306]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:47 compute-1 sudo[194384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alxzkczooisiehwhbbfoksolerqvfwdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089906.9800751-3861-70380718752175/AnsiballZ_file.py'
Jan 22 13:51:47 compute-1 sudo[194384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:47 compute-1 python3.9[194386]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:48 compute-1 sudo[194384]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:48.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:48.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:48 compute-1 sudo[194536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnspbokdzqagyovqrorwjujzsyicfxgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089908.2170935-3897-226269413721136/AnsiballZ_stat.py'
Jan 22 13:51:48 compute-1 sudo[194536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:48 compute-1 python3.9[194538]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:48 compute-1 sudo[194536]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:48 compute-1 sudo[194614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blnaveejgsuutgivlfydmdytifxiusfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089908.2170935-3897-226269413721136/AnsiballZ_file.py'
Jan 22 13:51:48 compute-1 sudo[194614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:49 compute-1 python3.9[194616]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:49 compute-1 sudo[194614]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:49 compute-1 sudo[194766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsdvfasjydqltkyildcbbtfctblsdhhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089909.5097935-3933-51116767838228/AnsiballZ_stat.py'
Jan 22 13:51:49 compute-1 sudo[194766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:50 compute-1 podman[194769]: 2026-01-22 13:51:50.084786471 +0000 UTC m=+0.067413970 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 22 13:51:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:50.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:50.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:50 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:50 compute-1 python3.9[194768]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:50 compute-1 ceph-mon[81715]: pgmap v702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:50 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 894 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:50 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:50 compute-1 sudo[194766]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:50 compute-1 auditd[704]: Audit daemon rotating log files
Jan 22 13:51:51 compute-1 sudo[194911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsqsfubwawcrrzqureqedlwyejrzxhqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089909.5097935-3933-51116767838228/AnsiballZ_copy.py'
Jan 22 13:51:51 compute-1 sudo[194911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:51 compute-1 python3.9[194913]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089909.5097935-3933-51116767838228/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:51 compute-1 sudo[194911]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:51 compute-1 sudo[195063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewdvaoqfnzrsnqiqbtbkdcsqdxleumhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089911.654455-3978-278227412190360/AnsiballZ_file.py'
Jan 22 13:51:51 compute-1 sudo[195063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:52.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:52 compute-1 python3.9[195065]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:52 compute-1 sudo[195063]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:52 compute-1 ceph-mon[81715]: pgmap v703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:52 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:52.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:52 compute-1 sudo[195215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avpckpzdagmagyugkajodgdvnuzpcxza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089912.472877-4002-186930950126015/AnsiballZ_command.py'
Jan 22 13:51:52 compute-1 sudo[195215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:52 compute-1 python3.9[195217]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:51:53 compute-1 sudo[195215]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:53 compute-1 sudo[195221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:51:53 compute-1 sudo[195221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:51:53 compute-1 sudo[195221]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:53 compute-1 sudo[195246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:51:53 compute-1 sudo[195246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:51:53 compute-1 sudo[195246]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:53 compute-1 sudo[195288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:51:53 compute-1 sudo[195288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:51:53 compute-1 sudo[195288]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:53 compute-1 sudo[195320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:51:53 compute-1 sudo[195320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:51:53 compute-1 ceph-mon[81715]: pgmap v704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:53 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:53 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 904 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:53 compute-1 sudo[195320]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:53 compute-1 sudo[195500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzyizwhgsgpjincqgunevtitxbetvnwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089913.4459624-4026-245611826029243/AnsiballZ_blockinfile.py'
Jan 22 13:51:53 compute-1 sudo[195500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:51:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:54.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:51:54 compute-1 python3.9[195502]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:54 compute-1 sudo[195500]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:51:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:54.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:51:54 compute-1 sudo[195652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkaimhbfugpkoiqlbvltlbibrppovdrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089914.6181993-4054-17877520914528/AnsiballZ_command.py'
Jan 22 13:51:54 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:54 compute-1 sudo[195652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:55 compute-1 python3.9[195654]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:51:55 compute-1 sudo[195652]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:55 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:55 compute-1 sudo[195805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjmotumioqzvwtakdjqgikvluswwtxgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089915.4644928-4078-179050886228476/AnsiballZ_stat.py'
Jan 22 13:51:55 compute-1 sudo[195805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:55 compute-1 python3.9[195807]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:51:56 compute-1 sudo[195805]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:56.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:56 compute-1 ceph-mon[81715]: pgmap v705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:56 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:56 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:51:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:51:56 compute-1 sudo[195959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vczdjlmicouhmkfyivmjavcdrrvfjznf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089916.4557405-4101-270070749987705/AnsiballZ_command.py'
Jan 22 13:51:56 compute-1 sudo[195959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:56 compute-1 python3.9[195961]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:51:57 compute-1 sudo[195959]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:57 compute-1 sudo[196114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdimplpjjhdqkpveoouwhacpeosfvfsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089917.5242794-4126-246279649825386/AnsiballZ_file.py'
Jan 22 13:51:57 compute-1 sudo[196114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:58 compute-1 python3.9[196116]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:58 compute-1 sudo[196114]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:58.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:58 compute-1 ceph-mon[81715]: pgmap v706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:51:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:51:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:58.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:51:58 compute-1 sudo[196266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oigqzabcmluqqjhkdyhmprvkscsyiimd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089918.2726147-4149-210332659786780/AnsiballZ_stat.py'
Jan 22 13:51:58 compute-1 sudo[196266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:58 compute-1 python3.9[196268]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:58 compute-1 sudo[196266]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:59 compute-1 sudo[196389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkafrcmwoqjowfdinsuqssvjpdzawyjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089918.2726147-4149-210332659786780/AnsiballZ_copy.py'
Jan 22 13:51:59 compute-1 sudo[196389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:59 compute-1 python3.9[196391]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089918.2726147-4149-210332659786780/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:59 compute-1 sudo[196389]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:51:59 compute-1 ceph-mon[81715]: pgmap v707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:51:59 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 909 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:00 compute-1 sudo[196541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgbsywvhcvykrczgygrcotaxxsdmudot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089919.7043467-4194-201113204815382/AnsiballZ_stat.py'
Jan 22 13:52:00 compute-1 sudo[196541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:00.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:00 compute-1 python3.9[196543]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:52:00 compute-1 sudo[196541]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:00.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:00 compute-1 sudo[196664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuwvzglnirqrgpjchqraeyahrsxuucqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089919.7043467-4194-201113204815382/AnsiballZ_copy.py'
Jan 22 13:52:00 compute-1 sudo[196664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:00 compute-1 python3.9[196666]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089919.7043467-4194-201113204815382/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:52:00 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:00 compute-1 sudo[196664]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:52:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:52:00 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:52:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:52:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:52:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:52:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:02.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:02.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:02 compute-1 ceph-mon[81715]: pgmap v708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:02 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:02 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:02 compute-1 sudo[196816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elgfuqoqofcjermstthlmpzrhutxktpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089921.278591-4239-263544070791140/AnsiballZ_stat.py'
Jan 22 13:52:02 compute-1 sudo[196816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:03 compute-1 python3.9[196818]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:52:03 compute-1 sudo[196816]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:03 compute-1 ceph-mon[81715]: pgmap v709: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:03 compute-1 sudo[196939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coylpnjegosvleanbdjwzcsucwuaooig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089921.278591-4239-263544070791140/AnsiballZ_copy.py'
Jan 22 13:52:03 compute-1 sudo[196939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:03 compute-1 python3.9[196941]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089921.278591-4239-263544070791140/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:52:03 compute-1 sudo[196939]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:04.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:04 compute-1 sudo[197091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvebbkiehlvwkfpvlrdbnduypqphrtfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089924.0446117-4284-146952740438656/AnsiballZ_systemd.py'
Jan 22 13:52:04 compute-1 sudo[197091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:04.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:04 compute-1 python3.9[197093]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:52:04 compute-1 systemd[1]: Reloading.
Jan 22 13:52:04 compute-1 systemd-sysv-generator[197124]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:52:04 compute-1 systemd-rc-local-generator[197121]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:52:05 compute-1 systemd[1]: Reached target edpm_libvirt.target.
Jan 22 13:52:05 compute-1 sudo[197091]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:06 compute-1 podman[197232]: 2026-01-22 13:52:06.130916729 +0000 UTC m=+0.111988069 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 13:52:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:06.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:06 compute-1 sudo[197307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmkotgmnwcujwupfkxgwahebslwohqdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089925.832018-4308-52541864969663/AnsiballZ_systemd.py'
Jan 22 13:52:06 compute-1 sudo[197307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:06.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:06 compute-1 python3.9[197309]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 13:52:06 compute-1 systemd[1]: Reloading.
Jan 22 13:52:06 compute-1 systemd-sysv-generator[197341]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:52:06 compute-1 systemd-rc-local-generator[197337]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:52:08 compute-1 systemd[1]: Reloading.
Jan 22 13:52:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:08.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:08 compute-1 systemd-rc-local-generator[197375]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:52:08 compute-1 systemd-sysv-generator[197378]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:52:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:08.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:09 compute-1 ceph-mon[81715]: pgmap v710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:09 compute-1 sudo[197307]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:09 compute-1 ceph-mon[81715]: pgmap v711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:09 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:09 compute-1 ceph-mon[81715]: pgmap v712: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:09 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 914 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:09 compute-1 sshd-session[140392]: Connection closed by 192.168.122.30 port 38382
Jan 22 13:52:09 compute-1 sshd-session[140389]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:52:09 compute-1 systemd[1]: session-48.scope: Deactivated successfully.
Jan 22 13:52:09 compute-1 systemd[1]: session-48.scope: Consumed 3min 34.813s CPU time.
Jan 22 13:52:09 compute-1 systemd-logind[787]: Session 48 logged out. Waiting for processes to exit.
Jan 22 13:52:09 compute-1 systemd-logind[787]: Removed session 48.
Jan 22 13:52:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 22 13:52:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:10.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 22 13:52:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:10.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:11 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:12.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:12.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:13 compute-1 ceph-mon[81715]: pgmap v713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:13 compute-1 sudo[197407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:52:13 compute-1 sudo[197407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:52:13 compute-1 sudo[197407]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:13 compute-1 sudo[197432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:52:13 compute-1 sudo[197432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:52:13 compute-1 sudo[197432]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:14.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:14.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:14 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:14 compute-1 ceph-mon[81715]: pgmap v714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:52:14 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:14 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 924 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:52:15 compute-1 sshd-session[197457]: Accepted publickey for zuul from 192.168.122.30 port 42638 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:52:15 compute-1 systemd-logind[787]: New session 49 of user zuul.
Jan 22 13:52:15 compute-1 systemd[1]: Started Session 49 of User zuul.
Jan 22 13:52:15 compute-1 sshd-session[197457]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:52:15 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:15 compute-1 ceph-mon[81715]: pgmap v715: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:15 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:16.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:16 compute-1 python3.9[197610]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:52:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:16.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:16 compute-1 ceph-mon[81715]: pgmap v716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:16 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:17 compute-1 python3.9[197764]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:52:17 compute-1 network[197781]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:52:17 compute-1 network[197782]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:52:17 compute-1 network[197783]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:52:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:18.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:18.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.670469) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938670555, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 760, "num_deletes": 250, "total_data_size": 1333262, "memory_usage": 1355048, "flush_reason": "Manual Compaction"}
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938691238, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 868137, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18095, "largest_seqno": 18850, "table_properties": {"data_size": 864527, "index_size": 1390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8225, "raw_average_key_size": 17, "raw_value_size": 856944, "raw_average_value_size": 1862, "num_data_blocks": 61, "num_entries": 460, "num_filter_entries": 460, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089892, "oldest_key_time": 1769089892, "file_creation_time": 1769089938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 20821 microseconds, and 4157 cpu microseconds.
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.691296) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 868137 bytes OK
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.691325) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.694007) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.694061) EVENT_LOG_v1 {"time_micros": 1769089938694048, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.694089) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1329084, prev total WAL file size 1345479, number of live WAL files 2.
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.695317) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(847KB)], [33(7245KB)]
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938695452, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 8287508, "oldest_snapshot_seqno": -1}
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5195 keys, 7744323 bytes, temperature: kUnknown
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938942087, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 7744323, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7711336, "index_size": 18925, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12997, "raw_key_size": 133631, "raw_average_key_size": 25, "raw_value_size": 7618468, "raw_average_value_size": 1466, "num_data_blocks": 757, "num_entries": 5195, "num_filter_entries": 5195, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:52:18 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.942428) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7744323 bytes
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.948459) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 33.6 rd, 31.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 7.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(18.5) write-amplify(8.9) OK, records in: 5707, records dropped: 512 output_compression: NoCompression
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.948498) EVENT_LOG_v1 {"time_micros": 1769089938948483, "job": 18, "event": "compaction_finished", "compaction_time_micros": 246746, "compaction_time_cpu_micros": 19853, "output_level": 6, "num_output_files": 1, "total_output_size": 7744323, "num_input_records": 5707, "num_output_records": 5195, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938948830, "job": 18, "event": "table_file_deletion", "file_number": 35}
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938950261, "job": 18, "event": "table_file_deletion", "file_number": 33}
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.695052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.950301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.950305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.950307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.950308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:52:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.950310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:52:19 compute-1 ceph-mon[81715]: pgmap v717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:19 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 929 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:19 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:19 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:20.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:20 compute-1 podman[197863]: 2026-01-22 13:52:20.199531662 +0000 UTC m=+0.068990183 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 13:52:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:20.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:21 compute-1 ceph-mon[81715]: pgmap v718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:21 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:22.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:22.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:22 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:23 compute-1 sudo[198072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btycivjsnfajzeehfvqdfcocrnmiseyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089942.7354832-102-186047660178934/AnsiballZ_setup.py'
Jan 22 13:52:23 compute-1 sudo[198072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:23 compute-1 python3.9[198074]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:52:23 compute-1 sudo[198072]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:23 compute-1 ceph-mon[81715]: pgmap v719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:23 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:24 compute-1 sudo[198156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxxwktmlytdhlqpwxttknbczkdohwrym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089942.7354832-102-186047660178934/AnsiballZ_dnf.py'
Jan 22 13:52:24 compute-1 sudo[198156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:24.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:24 compute-1 python3.9[198158]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:52:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:24.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:24 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:25 compute-1 ceph-mon[81715]: pgmap v720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:25 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:26.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:26.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:26 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:26 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:28.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:28 compute-1 ceph-mon[81715]: pgmap v721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:28 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:28.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:29 compute-1 ceph-mon[81715]: pgmap v722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:29 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 934 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:29 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:30 compute-1 sudo[198156]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:30.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:30.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:30 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:30 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:30 compute-1 sudo[198309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzaeffivxdwrkecqtuexchdjovvguojn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089950.3699846-138-172995423982797/AnsiballZ_stat.py'
Jan 22 13:52:30 compute-1 sudo[198309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:30 compute-1 python3.9[198311]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:52:30 compute-1 sudo[198309]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:31 compute-1 ceph-mon[81715]: pgmap v723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:31 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:31 compute-1 sudo[198461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdbmgdahapigifafnopfupaycvlmtpop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089951.4664128-168-149311263797834/AnsiballZ_command.py'
Jan 22 13:52:31 compute-1 sudo[198461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:32 compute-1 python3.9[198463]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:52:32 compute-1 sudo[198461]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:32.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:32.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:32 compute-1 ceph-mon[81715]: pgmap v724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:32 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:32 compute-1 sudo[198614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygjgpdxxffuaqiwboxwbwtcvsdazgdut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089952.635937-198-68316900673704/AnsiballZ_stat.py'
Jan 22 13:52:32 compute-1 sudo[198614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:33 compute-1 python3.9[198616]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:52:33 compute-1 sudo[198614]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:33 compute-1 sudo[198766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kayslymjjcetqftrktehauvweyncqjcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089953.4840565-222-254635545145206/AnsiballZ_command.py'
Jan 22 13:52:33 compute-1 sudo[198766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:33 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 944 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:33 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:33 compute-1 python3.9[198768]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:52:34 compute-1 sudo[198766]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:34.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:34.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:34 compute-1 sudo[198919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfyvhyzhawexzokwfjqvznkiqfsqvbvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089954.2326922-246-234136023076544/AnsiballZ_stat.py'
Jan 22 13:52:34 compute-1 sudo[198919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:34 compute-1 python3.9[198921]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:52:34 compute-1 sudo[198919]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:35 compute-1 sudo[199042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nixxunjrwktapeaauptdtqkqgrpufkgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089954.2326922-246-234136023076544/AnsiballZ_copy.py'
Jan 22 13:52:35 compute-1 sudo[199042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:35 compute-1 ceph-mon[81715]: pgmap v725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:35 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:35 compute-1 python3.9[199044]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089954.2326922-246-234136023076544/.source.iscsi _original_basename=.z1svhdm_ follow=False checksum=c04402da62a45aeb02eef40454c1ebe55b259f0c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:52:35 compute-1 sudo[199042]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:36 compute-1 sudo[199194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rihyigdemfnolrxcsdwizvndcocnzfnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089955.7284398-291-51960814735176/AnsiballZ_file.py'
Jan 22 13:52:36 compute-1 sudo[199194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:36.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:36 compute-1 podman[199196]: 2026-01-22 13:52:36.312536197 +0000 UTC m=+0.094393663 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 22 13:52:36 compute-1 python3.9[199197]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:52:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:52:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:36.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:52:36 compute-1 sudo[199194]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:36 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:37 compute-1 sudo[199372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztokxynfeeiihgdnokuqzhjvujckcjkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089956.8310616-315-140438241018899/AnsiballZ_lineinfile.py'
Jan 22 13:52:37 compute-1 sudo[199372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:37 compute-1 python3.9[199374]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:52:37 compute-1 sudo[199372]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:37 compute-1 ceph-mon[81715]: pgmap v726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:37 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:38.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:38.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:38 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:38 compute-1 sudo[199524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlwvgxyrscezzcmcemfaofjblwcjyjjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089957.930056-342-138053773474216/AnsiballZ_systemd_service.py'
Jan 22 13:52:38 compute-1 sudo[199524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:38 compute-1 python3.9[199526]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:52:39 compute-1 ceph-mon[81715]: pgmap v727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:39 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:39 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 949 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:39 compute-1 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 22 13:52:40 compute-1 sudo[199524]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:40.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:40.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:40 compute-1 sudo[199680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxklyqssqfktwtsnzcezdatvteunzqbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089960.2250826-366-216992292294535/AnsiballZ_systemd_service.py'
Jan 22 13:52:40 compute-1 sudo[199680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:40 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:40 compute-1 python3.9[199682]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:52:40 compute-1 systemd[1]: Reloading.
Jan 22 13:52:41 compute-1 systemd-rc-local-generator[199708]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:52:41 compute-1 systemd-sysv-generator[199711]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:52:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:41 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 22 13:52:41 compute-1 systemd[1]: Starting Open-iSCSI...
Jan 22 13:52:41 compute-1 kernel: Loading iSCSI transport class v2.0-870.
Jan 22 13:52:41 compute-1 systemd[1]: Started Open-iSCSI.
Jan 22 13:52:41 compute-1 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 22 13:52:41 compute-1 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 22 13:52:41 compute-1 sudo[199680]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:41 compute-1 ceph-mon[81715]: pgmap v728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:41 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:42.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:52:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:42.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:52:42 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:42 compute-1 ceph-mon[81715]: pgmap v729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:42 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:42 compute-1 python3.9[199880]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:52:42 compute-1 network[199897]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:52:42 compute-1 network[199898]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:52:42 compute-1 network[199899]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:52:43 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:44.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:44.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:44 compute-1 ceph-mon[81715]: pgmap v730: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:44 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:45 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:46.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:46.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:47 compute-1 ceph-mon[81715]: pgmap v731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:47 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:52:47.425 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:52:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:52:47.427 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:52:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:52:47.427 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:52:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:48.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:48 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:48 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 954 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:48.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:49 compute-1 ceph-mon[81715]: pgmap v732: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:49 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:50 compute-1 sudo[200169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzyvyfqyueutbijaqzlxzczyuztaqzbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089969.6873963-435-280818788047902/AnsiballZ_dnf.py'
Jan 22 13:52:50 compute-1 sudo[200169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:50.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:50 compute-1 python3.9[200171]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:52:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:50.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:51 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:51 compute-1 podman[200173]: 2026-01-22 13:52:51.106016072 +0000 UTC m=+0.084650865 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 13:52:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:52 compute-1 ceph-mon[81715]: pgmap v733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:52 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:52 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:52.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:52.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:52 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:52:52 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:52:52 compute-1 systemd[1]: Reloading.
Jan 22 13:52:52 compute-1 systemd-rc-local-generator[200235]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:52:53 compute-1 systemd-sysv-generator[200241]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:52:53 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:52:53 compute-1 ceph-mon[81715]: pgmap v734: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:53 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:53 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:52:53 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:52:53 compute-1 systemd[1]: run-re352d2953b404036b3ee02486a8957de.service: Deactivated successfully.
Jan 22 13:52:53 compute-1 sudo[200169]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:54.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:54 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:54 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:54.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:55 compute-1 ceph-mon[81715]: pgmap v735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:55 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:56.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:56.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:56 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:56 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:56 compute-1 sudo[200502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkmikkpslnnngsuqnijpeudtcmzyckcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089975.5844486-462-154184664660490/AnsiballZ_file.py'
Jan 22 13:52:56 compute-1 sudo[200502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:57 compute-1 python3.9[200504]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 22 13:52:57 compute-1 sudo[200502]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:57 compute-1 ceph-mon[81715]: pgmap v736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:57 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:58 compute-1 sudo[200654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpzvlhphjercirnhrsbccqyxplmeugfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089977.489076-486-215509831517726/AnsiballZ_modprobe.py'
Jan 22 13:52:58 compute-1 sudo[200654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:58 compute-1 python3.9[200656]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 22 13:52:58 compute-1 sudo[200654]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:58 compute-1 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 13:52:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:58.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:52:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:58.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:58 compute-1 ceph-mon[81715]: pgmap v737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:58 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:58 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:59 compute-1 sudo[200811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnwseyaxbdbpxnatdbvunrprwvgygvpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089978.8205345-510-159001209055462/AnsiballZ_stat.py'
Jan 22 13:52:59 compute-1 sudo[200811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:59 compute-1 python3.9[200813]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:52:59 compute-1 sudo[200811]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:59 compute-1 sudo[200934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrakfnkzbeojdfnxetedfgmxuvqgjzuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089978.8205345-510-159001209055462/AnsiballZ_copy.py'
Jan 22 13:52:59 compute-1 sudo[200934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:59 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:00 compute-1 python3.9[200936]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089978.8205345-510-159001209055462/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:00 compute-1 sudo[200934]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:00.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:00.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:00 compute-1 sudo[201086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrxatgewplatzdkldgtfszpwhoscailc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089980.3867097-558-155255670184010/AnsiballZ_lineinfile.py'
Jan 22 13:53:00 compute-1 sudo[201086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:00 compute-1 python3.9[201088]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:00 compute-1 sudo[201086]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:01 compute-1 ceph-mon[81715]: pgmap v738: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:01 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:01 compute-1 sudo[201238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmqakffnazhwwlnloeigtukxxqaljxms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089981.263828-582-5789922757091/AnsiballZ_systemd.py'
Jan 22 13:53:01 compute-1 sudo[201238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:02 compute-1 python3.9[201240]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:53:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:02.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:02 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 22 13:53:02 compute-1 systemd[1]: Stopped Load Kernel Modules.
Jan 22 13:53:02 compute-1 systemd[1]: Stopping Load Kernel Modules...
Jan 22 13:53:02 compute-1 systemd[1]: Starting Load Kernel Modules...
Jan 22 13:53:02 compute-1 systemd[1]: Finished Load Kernel Modules.
Jan 22 13:53:02 compute-1 sudo[201238]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:02.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:02 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:03 compute-1 sudo[201394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-artxddswudzqlqonuxltrevloumegfaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089982.7646184-606-84027825661040/AnsiballZ_command.py'
Jan 22 13:53:03 compute-1 sudo[201394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:03 compute-1 python3.9[201396]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:53:03 compute-1 sudo[201394]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:03 compute-1 ceph-mon[81715]: pgmap v739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:03 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:03 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:04 compute-1 sudo[201547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dapimphmpjkzncvyqkpkyomjjxxnikas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089983.8832905-636-149651645262779/AnsiballZ_stat.py'
Jan 22 13:53:04 compute-1 sudo[201547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:04.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:04 compute-1 python3.9[201549]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:53:04 compute-1 sudo[201547]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:04.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:05 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:05 compute-1 ceph-mon[81715]: pgmap v740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:05 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:05 compute-1 sudo[201699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oettiniwjgihxchbobbhccvwkmiddivq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089984.8463316-663-265881234612033/AnsiballZ_stat.py'
Jan 22 13:53:05 compute-1 sudo[201699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:05 compute-1 python3.9[201701]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:53:05 compute-1 sudo[201699]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:05 compute-1 sudo[201822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwatwboylkombqxyndrifnaizfrucobz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089984.8463316-663-265881234612033/AnsiballZ_copy.py'
Jan 22 13:53:05 compute-1 sudo[201822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:05 compute-1 python3.9[201824]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089984.8463316-663-265881234612033/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:06 compute-1 sudo[201822]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:06.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:06 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:06.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:06 compute-1 sudo[201984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxnwvfeslaskdmfaszufctnecgwjgiys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089986.6337771-709-232479024816830/AnsiballZ_command.py'
Jan 22 13:53:06 compute-1 sudo[201984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:07 compute-1 podman[201948]: 2026-01-22 13:53:07.020679791 +0000 UTC m=+0.103632854 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:53:07 compute-1 python3.9[201990]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:53:07 compute-1 sudo[201984]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:07 compute-1 ceph-mon[81715]: pgmap v741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:07 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:07 compute-1 sudo[202153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnxxxkddsnsclktoilcaqyuybhxlpdbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089987.4456904-732-60071997343835/AnsiballZ_lineinfile.py'
Jan 22 13:53:07 compute-1 sudo[202153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:08 compute-1 python3.9[202155]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:08 compute-1 sudo[202153]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:08.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:08.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:08 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:08 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:08 compute-1 sudo[202305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkvxtruzccmwqpwmdiytzyrmhrhschjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089988.26382-756-157360540481246/AnsiballZ_replace.py'
Jan 22 13:53:08 compute-1 sudo[202305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:08 compute-1 python3.9[202307]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:08 compute-1 sudo[202305]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:09 compute-1 sudo[202457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyupqwwwnxthcdcvgayqxcjwmzypnrff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089989.2538862-780-116555737689026/AnsiballZ_replace.py'
Jan 22 13:53:09 compute-1 sudo[202457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:09 compute-1 ceph-mon[81715]: pgmap v742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:09 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:09 compute-1 python3.9[202459]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:09 compute-1 sudo[202457]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:10.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:10.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:10 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:10 compute-1 sudo[202609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsdnfbpxlafzxbjgwqrarteyqugkfhbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089990.332796-807-65792637545074/AnsiballZ_lineinfile.py'
Jan 22 13:53:10 compute-1 sudo[202609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:10 compute-1 python3.9[202611]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:10 compute-1 sudo[202609]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:11 compute-1 sudo[202761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzfstaxtgiuqavqwlckyfqalosbtpvfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089991.101091-807-229400839043971/AnsiballZ_lineinfile.py'
Jan 22 13:53:11 compute-1 sudo[202761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:11 compute-1 python3.9[202763]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:11 compute-1 sudo[202761]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:11 compute-1 ceph-mon[81715]: pgmap v743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:11 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:12 compute-1 sudo[202913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljozoxatrcicpdtfqftdefnswfxrphbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089991.8289826-807-221349260227860/AnsiballZ_lineinfile.py'
Jan 22 13:53:12 compute-1 sudo[202913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:12.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:12 compute-1 python3.9[202915]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:12 compute-1 sudo[202913]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:12.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:12 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:12 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:12 compute-1 sudo[203065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wutxdhtjdvxobidjapabchkznnsewwyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089992.4949217-807-222326727239610/AnsiballZ_lineinfile.py'
Jan 22 13:53:12 compute-1 sudo[203065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:12 compute-1 python3.9[203067]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:12 compute-1 sudo[203065]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:13 compute-1 ceph-mon[81715]: pgmap v744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:13 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:13 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:13 compute-1 sudo[203217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmvsowtikbefhktflfevdklzzmlughpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089993.625582-894-279152394338183/AnsiballZ_stat.py'
Jan 22 13:53:13 compute-1 sudo[203217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:14 compute-1 sudo[203220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:53:14 compute-1 sudo[203220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:14 compute-1 sudo[203220]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:14 compute-1 sudo[203245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:53:14 compute-1 sudo[203245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:14 compute-1 sudo[203245]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:14 compute-1 python3.9[203219]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:53:14 compute-1 sudo[203270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:53:14 compute-1 sudo[203270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:14 compute-1 sudo[203270]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:14 compute-1 sudo[203217]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:14 compute-1 sudo[203297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:53:14 compute-1 sudo[203297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:14.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:14.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:14 compute-1 podman[203418]: 2026-01-22 13:53:14.660461994 +0000 UTC m=+0.064049171 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 13:53:14 compute-1 podman[203418]: 2026-01-22 13:53:14.757040984 +0000 UTC m=+0.160628161 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:53:14 compute-1 ceph-mon[81715]: pgmap v745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:14 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:15 compute-1 sudo[203630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbdpfbvfgyhuxpnqalivgykqvcyhkmzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089994.709741-918-95885117493742/AnsiballZ_command.py'
Jan 22 13:53:15 compute-1 sudo[203630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:15 compute-1 sudo[203297]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:15 compute-1 python3.9[203635]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:53:15 compute-1 sudo[203630]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:15 compute-1 sudo[203693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:53:15 compute-1 sudo[203693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:15 compute-1 sudo[203693]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:15 compute-1 sudo[203719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:53:15 compute-1 sudo[203719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:15 compute-1 sudo[203719]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:15 compute-1 sudo[203744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:53:15 compute-1 sudo[203744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:15 compute-1 sudo[203744]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:15 compute-1 sudo[203769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:53:15 compute-1 sudo[203769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:15 compute-1 sudo[203950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjyelarnkoyazmggvmlkjowhwcpnkupi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089995.6554224-945-153794593447010/AnsiballZ_systemd_service.py'
Jan 22 13:53:15 compute-1 sudo[203769]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:15 compute-1 sudo[203950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:16 compute-1 python3.9[203952]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:16.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:16 compute-1 systemd[1]: Listening on multipathd control socket.
Jan 22 13:53:16 compute-1 sudo[203950]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:16.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:16 compute-1 sudo[204106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltvxxwoifolbucczqyabzscxnrraesqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089996.638887-969-7139683983617/AnsiballZ_systemd_service.py'
Jan 22 13:53:16 compute-1 sudo[204106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:17 compute-1 python3.9[204108]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:17 compute-1 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 22 13:53:17 compute-1 udevadm[204113]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 22 13:53:17 compute-1 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 22 13:53:17 compute-1 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 22 13:53:17 compute-1 multipathd[204116]: --------start up--------
Jan 22 13:53:17 compute-1 multipathd[204116]: read /etc/multipath.conf
Jan 22 13:53:17 compute-1 multipathd[204116]: path checkers start up
Jan 22 13:53:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:53:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:53:17 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:53:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:53:17 compute-1 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 22 13:53:17 compute-1 sudo[204106]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:18 compute-1 ceph-mon[81715]: pgmap v746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:18 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:18 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:53:18 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:53:18 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:53:18 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:53:18 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:18.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:18.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:18 compute-1 sudo[204273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfrolzqbrnzmqucgiybpbcriuaephrbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089998.4944775-1005-185389692099053/AnsiballZ_file.py'
Jan 22 13:53:18 compute-1 sudo[204273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:18 compute-1 python3.9[204275]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 22 13:53:19 compute-1 sudo[204273]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:19 compute-1 ceph-mon[81715]: pgmap v747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:19 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:19 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:19 compute-1 sudo[204425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivndpnupuphbfdkragbfywtrcmhaqeul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089999.3554997-1029-215088004183809/AnsiballZ_modprobe.py'
Jan 22 13:53:19 compute-1 sudo[204425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:19 compute-1 python3.9[204427]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 22 13:53:19 compute-1 kernel: Key type psk registered
Jan 22 13:53:19 compute-1 sudo[204425]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:20 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:20.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:20.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:20 compute-1 sudo[204588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqrbevxxwlojxitrwpqbqdlkirdqdjyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090000.20843-1053-212470424857449/AnsiballZ_stat.py'
Jan 22 13:53:20 compute-1 sudo[204588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:20 compute-1 python3.9[204590]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:53:20 compute-1 sudo[204588]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:21 compute-1 sudo[204711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iisvokdhihlnvrbbduxekhyfcbzvpuks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090000.20843-1053-212470424857449/AnsiballZ_copy.py'
Jan 22 13:53:21 compute-1 sudo[204711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:21 compute-1 podman[204713]: 2026-01-22 13:53:21.273886674 +0000 UTC m=+0.060692310 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 13:53:21 compute-1 ceph-mon[81715]: pgmap v748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:21 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:21 compute-1 python3.9[204714]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769090000.20843-1053-212470424857449/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:21 compute-1 sudo[204711]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:22 compute-1 sudo[204882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtetaaqikqwajfhhjhxxvdrjasvgdzxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090001.822338-1102-56963418956895/AnsiballZ_lineinfile.py'
Jan 22 13:53:22 compute-1 sudo[204882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:22.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:22 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:22 compute-1 python3.9[204884]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:22 compute-1 sudo[204882]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:22.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:22 compute-1 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 22 13:53:22 compute-1 sudo[205035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djjeihgikqkglqwdfturfzxtzgxdlgvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090002.5951009-1125-280476897535304/AnsiballZ_systemd.py'
Jan 22 13:53:22 compute-1 sudo[205035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:23 compute-1 python3.9[205037]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:53:23 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 22 13:53:23 compute-1 systemd[1]: Stopped Load Kernel Modules.
Jan 22 13:53:23 compute-1 systemd[1]: Stopping Load Kernel Modules...
Jan 22 13:53:23 compute-1 systemd[1]: Starting Load Kernel Modules...
Jan 22 13:53:23 compute-1 systemd[1]: Finished Load Kernel Modules.
Jan 22 13:53:23 compute-1 sudo[205035]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:23 compute-1 ceph-mon[81715]: pgmap v749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:23 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:23 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:23 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:53:23 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:53:23 compute-1 sudo[205066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:53:23 compute-1 sudo[205066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:23 compute-1 sudo[205066]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:23 compute-1 sudo[205091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:53:23 compute-1 sudo[205091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:23 compute-1 sudo[205091]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:23 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 22 13:53:24 compute-1 sudo[205242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epojpezlxxupmlxqeskcdvaviaxpkdsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090003.693197-1149-140333924396105/AnsiballZ_dnf.py'
Jan 22 13:53:24 compute-1 sudo[205242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:24 compute-1 python3.9[205244]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:53:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:24.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:24 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:24.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:25 compute-1 ceph-mon[81715]: pgmap v750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:25 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:26.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:26.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:26 compute-1 systemd[1]: Reloading.
Jan 22 13:53:26 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:26 compute-1 systemd-rc-local-generator[205273]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:53:26 compute-1 systemd-sysv-generator[205276]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:53:26 compute-1 systemd[1]: Reloading.
Jan 22 13:53:27 compute-1 systemd-sysv-generator[205316]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:53:27 compute-1 systemd-rc-local-generator[205311]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:53:27 compute-1 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 22 13:53:27 compute-1 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 22 13:53:27 compute-1 lvm[205360]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 13:53:27 compute-1 lvm[205360]: VG ceph_vg0 finished
Jan 22 13:53:27 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:53:27 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:53:27 compute-1 systemd[1]: Reloading.
Jan 22 13:53:27 compute-1 ceph-mon[81715]: pgmap v751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:27 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:27 compute-1 systemd-rc-local-generator[205406]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:53:27 compute-1 systemd-sysv-generator[205410]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:53:27 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:53:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:28.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:28 compute-1 sudo[205242]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:28.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:28 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:28 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:28 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:53:28 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:53:28 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.631s CPU time.
Jan 22 13:53:29 compute-1 systemd[1]: run-r11a9d633f44c428092a4f53412932160.service: Deactivated successfully.
Jan 22 13:53:29 compute-1 sudo[206708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvkzthpegviexsyyskpyeegaetkbjiux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090009.0065777-1173-74020070509466/AnsiballZ_systemd_service.py'
Jan 22 13:53:29 compute-1 sudo[206708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:29 compute-1 python3.9[206710]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:53:29 compute-1 systemd[1]: Stopping Open-iSCSI...
Jan 22 13:53:29 compute-1 iscsid[199722]: iscsid shutting down.
Jan 22 13:53:29 compute-1 systemd[1]: iscsid.service: Deactivated successfully.
Jan 22 13:53:29 compute-1 systemd[1]: Stopped Open-iSCSI.
Jan 22 13:53:29 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 22 13:53:29 compute-1 systemd[1]: Starting Open-iSCSI...
Jan 22 13:53:29 compute-1 systemd[1]: Started Open-iSCSI.
Jan 22 13:53:29 compute-1 sudo[206708]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:30.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:30.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:30 compute-1 ceph-mon[81715]: pgmap v752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:30 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:30 compute-1 sudo[206864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsrzbpgdtiowdulupracwxfsnicspaak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090010.305365-1197-21293140928377/AnsiballZ_systemd_service.py'
Jan 22 13:53:30 compute-1 sudo[206864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:30 compute-1 python3.9[206866]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:53:30 compute-1 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 22 13:53:30 compute-1 multipathd[204116]: exit (signal)
Jan 22 13:53:30 compute-1 multipathd[204116]: --------shut down-------
Jan 22 13:53:30 compute-1 systemd[1]: multipathd.service: Deactivated successfully.
Jan 22 13:53:30 compute-1 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 22 13:53:30 compute-1 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 22 13:53:31 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:31 compute-1 ceph-mon[81715]: pgmap v753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:31 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:31 compute-1 multipathd[206872]: --------start up--------
Jan 22 13:53:31 compute-1 multipathd[206872]: read /etc/multipath.conf
Jan 22 13:53:31 compute-1 multipathd[206872]: path checkers start up
Jan 22 13:53:31 compute-1 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 22 13:53:31 compute-1 sudo[206864]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:31 compute-1 python3.9[207029]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:53:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:32.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:32 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:32 compute-1 sudo[207183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcveysoiftbrrqvfdowvfoekljiwkcca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090012.6908333-1249-280663240629676/AnsiballZ_file.py'
Jan 22 13:53:32 compute-1 sudo[207183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:33 compute-1 python3.9[207185]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:33 compute-1 sudo[207183]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:33 compute-1 ceph-mon[81715]: pgmap v754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:33 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:33 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:33 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:34 compute-1 sudo[207335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jltspefssebijwskvicnpwgilkcioids ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090013.7467024-1282-156142912484922/AnsiballZ_systemd_service.py'
Jan 22 13:53:34 compute-1 sudo[207335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:34.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:34 compute-1 python3.9[207337]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:53:34 compute-1 systemd[1]: Reloading.
Jan 22 13:53:34 compute-1 systemd-rc-local-generator[207365]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:53:34 compute-1 systemd-sysv-generator[207368]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:53:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:34.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:34 compute-1 sudo[207335]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:35 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 22 13:53:35 compute-1 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 22 13:53:35 compute-1 python3.9[207524]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:53:35 compute-1 network[207541]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:53:35 compute-1 network[207542]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:53:35 compute-1 network[207543]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:53:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:36.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:36 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:36 compute-1 ceph-mon[81715]: pgmap v755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:36.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:37 compute-1 podman[207574]: 2026-01-22 13:53:37.21496307 +0000 UTC m=+0.137708881 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 22 13:53:37 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:37 compute-1 ceph-mon[81715]: pgmap v756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:37 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:38.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:39 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:39 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:39 compute-1 ceph-mon[81715]: pgmap v757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:39 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:39 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:40.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:40.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:40 compute-1 ceph-mon[81715]: pgmap v758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:40 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:42 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:42.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:42 compute-1 sudo[207838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmibodsbepcnaynobnfmonmvfvytprld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090022.1178691-1339-88338584810705/AnsiballZ_systemd_service.py'
Jan 22 13:53:42 compute-1 sudo[207838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:42.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:42 compute-1 python3.9[207840]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:42 compute-1 sudo[207838]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:43 compute-1 ceph-mon[81715]: pgmap v759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:43 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:43 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:43 compute-1 sudo[207991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fystltollpyxaiwkvzwtjxkhtvosfxky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090023.3404071-1339-182457987757492/AnsiballZ_systemd_service.py'
Jan 22 13:53:43 compute-1 sudo[207991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:43 compute-1 python3.9[207993]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:43 compute-1 sudo[207991]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:44.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:44 compute-1 sudo[208144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzbmqqtxrmmcumujxjovrmohjpawmlwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090024.1091692-1339-11743311564576/AnsiballZ_systemd_service.py'
Jan 22 13:53:44 compute-1 sudo[208144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:44.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:44 compute-1 python3.9[208146]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:44 compute-1 sudo[208144]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:45 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:45 compute-1 sudo[208297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emnrphtrppdmqmyjshdxafafyvzbwpem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090024.9256709-1339-212929020534003/AnsiballZ_systemd_service.py'
Jan 22 13:53:45 compute-1 sudo[208297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:45 compute-1 python3.9[208299]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:45 compute-1 sudo[208297]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:46 compute-1 sudo[208450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnqtosmgxfhhqrwpfrvahhoifwuxdwzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090025.815851-1339-119868968671378/AnsiballZ_systemd_service.py'
Jan 22 13:53:46 compute-1 sudo[208450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:46.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:46 compute-1 python3.9[208452]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:46 compute-1 sudo[208450]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:46.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:47 compute-1 sudo[208603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huetanpskzsehjfwbmpmdqwmlaaeemjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090026.627274-1339-194769334937212/AnsiballZ_systemd_service.py'
Jan 22 13:53:47 compute-1 sudo[208603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:47 compute-1 ceph-mon[81715]: pgmap v760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:47 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:47 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:47 compute-1 python3.9[208605]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:47 compute-1 sudo[208603]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:53:47.427 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:53:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:53:47.428 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:53:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:53:47.428 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:53:47 compute-1 sudo[208756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dflzmrjhtswhzwlmzkatudfrycyzvvky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090027.5489304-1339-8994341307564/AnsiballZ_systemd_service.py'
Jan 22 13:53:47 compute-1 sudo[208756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:48 compute-1 python3.9[208758]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:48 compute-1 sudo[208756]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:48.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:48 compute-1 ceph-mon[81715]: pgmap v761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:48 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:48 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:48.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:48 compute-1 sudo[208909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdxzimijamluwtogffdsxgtywbdwacoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090028.3373952-1339-31015743263544/AnsiballZ_systemd_service.py'
Jan 22 13:53:48 compute-1 sudo[208909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:48 compute-1 python3.9[208911]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:49 compute-1 ceph-mon[81715]: pgmap v762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:49 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:49 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:49 compute-1 sudo[208909]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:50 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:50.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:50.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:50 compute-1 sudo[209062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpiprrmssyresrsnselxlpscrhmsdriy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090030.6746018-1516-24881356018452/AnsiballZ_file.py'
Jan 22 13:53:50 compute-1 sudo[209062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:51 compute-1 python3.9[209064]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:51 compute-1 sudo[209062]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:51 compute-1 ceph-mon[81715]: pgmap v763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:51 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:51 compute-1 sudo[209224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwdhcznzspcgmeaywqjvexdhnuqofwgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090031.3165352-1516-116210288856810/AnsiballZ_file.py'
Jan 22 13:53:51 compute-1 sudo[209224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:51 compute-1 podman[209188]: 2026-01-22 13:53:51.617458442 +0000 UTC m=+0.060234514 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 13:53:51 compute-1 python3.9[209233]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:51 compute-1 sudo[209224]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:52 compute-1 sudo[209386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzrvznpljtwpbxmqgvjzjysnzwqaiwvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090031.94018-1516-43663123328884/AnsiballZ_file.py'
Jan 22 13:53:52 compute-1 sudo[209386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:52.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:52 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:52 compute-1 python3.9[209388]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:52 compute-1 sudo[209386]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:52.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:52 compute-1 sudo[209538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbaakgadsijmjtlmfprtjvdsnzezfixu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090032.5724807-1516-245088325769572/AnsiballZ_file.py'
Jan 22 13:53:52 compute-1 sudo[209538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:53 compute-1 python3.9[209540]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:53 compute-1 sudo[209538]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:53 compute-1 ceph-mon[81715]: pgmap v764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:53 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:53 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:53 compute-1 sudo[209690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljutlyfqqskhhpbinxfiqiawaapswjxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090033.204532-1516-51448636887190/AnsiballZ_file.py'
Jan 22 13:53:53 compute-1 sudo[209690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:53 compute-1 python3.9[209692]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:53 compute-1 sudo[209690]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:54.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:54 compute-1 sudo[209842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beliheybusfheecblauppplaatswnguv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090034.0491595-1516-269800256792155/AnsiballZ_file.py'
Jan 22 13:53:54 compute-1 sudo[209842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:54 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:54.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:54 compute-1 python3.9[209844]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:54 compute-1 sudo[209842]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:55 compute-1 sudo[209994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqgwmukfnmlqpofmwcfsqqudbxzliztf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090034.8117504-1516-155124130191950/AnsiballZ_file.py'
Jan 22 13:53:55 compute-1 sudo[209994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:55 compute-1 python3.9[209996]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:55 compute-1 sudo[209994]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:55 compute-1 ceph-mon[81715]: pgmap v765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:55 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:55 compute-1 sudo[210146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjdlbnmijsjzdxlsqqbxhxberunyfyih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090035.4549575-1516-77230727503599/AnsiballZ_file.py'
Jan 22 13:53:55 compute-1 sudo[210146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:55 compute-1 python3.9[210148]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:55 compute-1 sudo[210146]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:56.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:56.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:57 compute-1 sudo[210298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdwvaawtigcijzjfaputljxktopofgef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090037.0130932-1687-164549390703369/AnsiballZ_file.py'
Jan 22 13:53:57 compute-1 sudo[210298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:57 compute-1 python3.9[210300]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:57 compute-1 sudo[210298]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:58 compute-1 sudo[210450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztatehuyfaqtascrwkzmdjqvrgwpvfhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090037.7801826-1687-50526026417850/AnsiballZ_file.py'
Jan 22 13:53:58 compute-1 sudo[210450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:58 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:58 compute-1 python3.9[210452]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:58 compute-1 sudo[210450]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:58.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:53:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:58.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:58 compute-1 sudo[210602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axfiwsaebwghwhenqfbcrnrzoccukdpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090038.4687421-1687-58928097990771/AnsiballZ_file.py'
Jan 22 13:53:58 compute-1 sudo[210602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:58 compute-1 python3.9[210604]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:58 compute-1 sudo[210602]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:59 compute-1 ceph-mon[81715]: pgmap v766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:59 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:59 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:59 compute-1 ceph-mon[81715]: pgmap v767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:59 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:59 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:59 compute-1 sudo[210754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epywkreafeedoicnayfiapmejvbkfpax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090039.0947556-1687-46982551257034/AnsiballZ_file.py'
Jan 22 13:53:59 compute-1 sudo[210754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:59 compute-1 python3.9[210756]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:59 compute-1 sudo[210754]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:00 compute-1 sudo[210906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzgvygfwnhigttrypjrxqxpzdgilxhiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090039.7695456-1687-226878806790127/AnsiballZ_file.py'
Jan 22 13:54:00 compute-1 sudo[210906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:00 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:00 compute-1 python3.9[210908]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:54:00 compute-1 sudo[210906]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:00.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:00.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:00 compute-1 sudo[211058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liekrqjlgtcbuigsguluzhowefslbwev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090040.3935893-1687-125620438094610/AnsiballZ_file.py'
Jan 22 13:54:00 compute-1 sudo[211058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:00 compute-1 python3.9[211060]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:54:00 compute-1 sudo[211058]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:01 compute-1 ceph-mon[81715]: pgmap v768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:01 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:01 compute-1 sudo[211210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfdvwswydnbnxpgbfwdiuecqzflgiqtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090041.0626261-1687-119384912006196/AnsiballZ_file.py'
Jan 22 13:54:01 compute-1 sudo[211210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:01 compute-1 python3.9[211212]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:54:01 compute-1 sudo[211210]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:01 compute-1 sudo[211362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrnhezjocnvymjgtjpikstowzvtudmsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090041.6858718-1687-196504681030409/AnsiballZ_file.py'
Jan 22 13:54:01 compute-1 sudo[211362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:02 compute-1 python3.9[211364]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:54:02 compute-1 sudo[211362]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:02 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:02.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:02.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:03 compute-1 ceph-mon[81715]: pgmap v769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:03 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:03 compute-1 sudo[211514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myjmwsybkekjsmweovielnwmvctmumof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090043.430256-1861-52966163545374/AnsiballZ_command.py'
Jan 22 13:54:03 compute-1 sudo[211514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:03 compute-1 python3.9[211516]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:03 compute-1 sudo[211514]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:04 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:04 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:04.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:04.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:04 compute-1 python3.9[211668]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 13:54:05 compute-1 ceph-mon[81715]: pgmap v770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:05 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:05 compute-1 sudo[211818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twpqrpbebxvyfhghsjxbpmlurxasejwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090045.3797896-1916-146077213607020/AnsiballZ_systemd_service.py'
Jan 22 13:54:05 compute-1 sudo[211818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:05 compute-1 python3.9[211820]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:54:05 compute-1 systemd[1]: Reloading.
Jan 22 13:54:06 compute-1 systemd-rc-local-generator[211852]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:54:06 compute-1 systemd-sysv-generator[211855]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:54:06 compute-1 sudo[211818]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:06 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:06.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:06.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:07 compute-1 sudo[212006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtaqvvluhwhijuedjukbjxruaybhqglq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090046.7163908-1939-43722604806744/AnsiballZ_command.py'
Jan 22 13:54:07 compute-1 sudo[212006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:07 compute-1 python3.9[212008]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:07 compute-1 sudo[212006]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:07 compute-1 ceph-mon[81715]: pgmap v771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:07 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:07 compute-1 sudo[212172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yadjbgrbqkcmwzulwpwskjijnouthvuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090047.3641572-1939-138679492877130/AnsiballZ_command.py'
Jan 22 13:54:07 compute-1 sudo[212172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:07 compute-1 podman[212133]: 2026-01-22 13:54:07.70505362 +0000 UTC m=+0.095939984 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 22 13:54:07 compute-1 python3.9[212180]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:07 compute-1 sudo[212172]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:08 compute-1 sudo[212338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckrteqkndezaagqvjgpqlgexmhetnden ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090048.0214965-1939-5488263721098/AnsiballZ_command.py'
Jan 22 13:54:08 compute-1 sudo[212338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:08.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:08 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:08 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1039 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:08 compute-1 python3.9[212340]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:08.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:08 compute-1 sudo[212338]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:09 compute-1 sudo[212491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxzwsireteryvrxufsijzvuuvhlniaes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090048.7199106-1939-270479600943212/AnsiballZ_command.py'
Jan 22 13:54:09 compute-1 sudo[212491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:09 compute-1 python3.9[212493]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:09 compute-1 sudo[212491]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:09 compute-1 ceph-mon[81715]: pgmap v772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:09 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:09 compute-1 sudo[212644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqswpohwxibhdkhrlfpoplnggphetzjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090049.3671818-1939-455885153250/AnsiballZ_command.py'
Jan 22 13:54:09 compute-1 sudo[212644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:09 compute-1 python3.9[212646]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:10.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:10 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:10.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:11 compute-1 sudo[212644]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:11 compute-1 ceph-mon[81715]: pgmap v773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:11 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:11 compute-1 sudo[212797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlyzeggewwisxconxgoejfyfhtngrloq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090051.1908674-1939-112607699595901/AnsiballZ_command.py'
Jan 22 13:54:11 compute-1 sudo[212797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:11 compute-1 python3.9[212799]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:11 compute-1 sudo[212797]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:12 compute-1 sudo[212950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itzooptawqyzrqjmzepfkxygydoxueex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090051.8301063-1939-142206159838320/AnsiballZ_command.py'
Jan 22 13:54:12 compute-1 sudo[212950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:12 compute-1 python3.9[212952]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:12 compute-1 sudo[212950]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:12.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:12 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:54:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:12.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:54:12 compute-1 sudo[213103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfqgubpmgzpbauliexfzgcmlghlxpelf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090052.4345894-1939-33697630240957/AnsiballZ_command.py'
Jan 22 13:54:12 compute-1 sudo[213103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:12 compute-1 python3.9[213105]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:12 compute-1 sudo[213103]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.327756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053327820, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1667, "num_deletes": 256, "total_data_size": 3216962, "memory_usage": 3274680, "flush_reason": "Manual Compaction"}
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053341928, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 2115001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18855, "largest_seqno": 20517, "table_properties": {"data_size": 2108467, "index_size": 3414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16221, "raw_average_key_size": 20, "raw_value_size": 2094087, "raw_average_value_size": 2620, "num_data_blocks": 150, "num_entries": 799, "num_filter_entries": 799, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089938, "oldest_key_time": 1769089938, "file_creation_time": 1769090053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 14237 microseconds, and 5823 cpu microseconds.
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.342002) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 2115001 bytes OK
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.342026) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.343459) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.343475) EVENT_LOG_v1 {"time_micros": 1769090053343469, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.343495) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3209033, prev total WAL file size 3209033, number of live WAL files 2.
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.344305) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(2065KB)], [36(7562KB)]
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053344366, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 9859324, "oldest_snapshot_seqno": -1}
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 5467 keys, 9664481 bytes, temperature: kUnknown
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053405954, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 9664481, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9628150, "index_size": 21565, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 141077, "raw_average_key_size": 25, "raw_value_size": 9528863, "raw_average_value_size": 1742, "num_data_blocks": 864, "num_entries": 5467, "num_filter_entries": 5467, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769090053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.406333) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9664481 bytes
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.408008) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.8 rd, 156.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 5994, records dropped: 527 output_compression: NoCompression
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.408031) EVENT_LOG_v1 {"time_micros": 1769090053408019, "job": 20, "event": "compaction_finished", "compaction_time_micros": 61714, "compaction_time_cpu_micros": 22186, "output_level": 6, "num_output_files": 1, "total_output_size": 9664481, "num_input_records": 5994, "num_output_records": 5467, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053408590, "job": 20, "event": "table_file_deletion", "file_number": 38}
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053410513, "job": 20, "event": "table_file_deletion", "file_number": 36}
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.344234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.410561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.410566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.410568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.410569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.410570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:13 compute-1 ceph-mon[81715]: pgmap v774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:13 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:13 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:14.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:14 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:14.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:14 compute-1 sudo[213256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khfmkhukqiygpsgucnysbvxxqbbyowws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090054.1296322-2146-279836369600984/AnsiballZ_file.py'
Jan 22 13:54:14 compute-1 sudo[213256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:14 compute-1 python3.9[213258]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:14 compute-1 sudo[213256]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:15 compute-1 sudo[213408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwewisevategyflqgbalxpgxhwwjgakc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090054.9471416-2146-116386816862802/AnsiballZ_file.py'
Jan 22 13:54:15 compute-1 sudo[213408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:15 compute-1 python3.9[213410]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:15 compute-1 sudo[213408]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:15 compute-1 ceph-mon[81715]: pgmap v775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:15 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:15 compute-1 sudo[213560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdakjtbteqzrruupofoyjmtdpzsyrvdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090055.5995886-2146-163597001008874/AnsiballZ_file.py'
Jan 22 13:54:15 compute-1 sudo[213560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:16 compute-1 python3.9[213562]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:16 compute-1 sudo[213560]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:16.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:16 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:16.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:16 compute-1 sudo[213712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buavfymdczlhfydwdydlendvtagrnxsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090056.6375961-2212-137474703372926/AnsiballZ_file.py'
Jan 22 13:54:16 compute-1 sudo[213712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:17 compute-1 python3.9[213714]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:17 compute-1 sudo[213712]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:17 compute-1 ceph-mon[81715]: pgmap v776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:17 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:17 compute-1 sudo[213864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtstnaxvsbiuvuyrpiwyyuesbpylwexc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090057.264885-2212-25456254627956/AnsiballZ_file.py'
Jan 22 13:54:17 compute-1 sudo[213864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:17 compute-1 python3.9[213866]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:17 compute-1 sudo[213864]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:18 compute-1 sudo[214016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfumqsuacgcnlrilagjwwgwkyqeietth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090057.9298716-2212-194058488016637/AnsiballZ_file.py'
Jan 22 13:54:18 compute-1 sudo[214016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:18 compute-1 python3.9[214018]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:18 compute-1 sudo[214016]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:18.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:18.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:18 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:18 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1049 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:18 compute-1 sudo[214168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muhahidbtckovrxnjdofvtkfdiykjbli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090058.5370839-2212-256102658547750/AnsiballZ_file.py'
Jan 22 13:54:18 compute-1 sudo[214168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:18 compute-1 python3.9[214170]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:19 compute-1 sudo[214168]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:19 compute-1 sudo[214320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfqfsqypyvfpggplbztnejpwjsdgwrzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090059.1861403-2212-274999725258711/AnsiballZ_file.py'
Jan 22 13:54:19 compute-1 sudo[214320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:19 compute-1 ceph-mon[81715]: pgmap v777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:19 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:19 compute-1 python3.9[214322]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:19 compute-1 sudo[214320]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:20 compute-1 sudo[214472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzbrkrlqipupnphwnmvlmbaaeqrnlmiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090059.8087828-2212-77885576922979/AnsiballZ_file.py'
Jan 22 13:54:20 compute-1 sudo[214472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:20 compute-1 python3.9[214474]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:20 compute-1 sudo[214472]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:20.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:20.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:20 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:20 compute-1 sudo[214624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkhcfairvswtfmwdwupckutjeeadgjae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090060.4621909-2212-116569891227478/AnsiballZ_file.py'
Jan 22 13:54:20 compute-1 sudo[214624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:20 compute-1 python3.9[214626]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:20 compute-1 sudo[214624]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:21 compute-1 ceph-mon[81715]: pgmap v778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:21 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:22 compute-1 podman[214651]: 2026-01-22 13:54:22.062925456 +0000 UTC m=+0.054315662 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 13:54:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:22.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:22.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:22 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:22 compute-1 ceph-mon[81715]: pgmap v779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:23 compute-1 sudo[214670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:54:23 compute-1 sudo[214670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:23 compute-1 sudo[214670]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:23 compute-1 sudo[214695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:54:23 compute-1 sudo[214695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:23 compute-1 sudo[214695]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:23 compute-1 sudo[214720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:54:23 compute-1 sudo[214720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:23 compute-1 sudo[214720]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:23 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:23 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1054 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:23 compute-1 sudo[214745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:54:23 compute-1 sudo[214745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:24.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:24 compute-1 sudo[214745]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:24.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:24 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:24 compute-1 ceph-mon[81715]: pgmap v780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:54:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:54:25 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:54:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:54:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:54:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:54:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:54:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:54:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:26.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:26.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:26 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:26 compute-1 ceph-mon[81715]: pgmap v781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:27 compute-1 sudo[214926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuvfqcliwhechwgsyodzpjwqzftizeku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090067.0492928-2537-122951999775667/AnsiballZ_getent.py'
Jan 22 13:54:27 compute-1 sudo[214926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:27 compute-1 python3.9[214928]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 22 13:54:27 compute-1 sudo[214926]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:27 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:28.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:28 compute-1 sudo[215079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sokljehfgfwrnpggoafaxdoaeochhvli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090068.098134-2561-161691446761214/AnsiballZ_group.py'
Jan 22 13:54:28 compute-1 sudo[215079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:28.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:28 compute-1 python3.9[215081]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 13:54:28 compute-1 groupadd[215082]: group added to /etc/group: name=nova, GID=42436
Jan 22 13:54:28 compute-1 groupadd[215082]: group added to /etc/gshadow: name=nova
Jan 22 13:54:28 compute-1 groupadd[215082]: new group: name=nova, GID=42436
Jan 22 13:54:28 compute-1 sudo[215079]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:28 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:28 compute-1 ceph-mon[81715]: pgmap v782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:28 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:29 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:29 compute-1 sudo[215237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmyldzrbkdlwbpbirxqqlwbugnsncarq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090069.4126608-2585-58179366082611/AnsiballZ_user.py'
Jan 22 13:54:29 compute-1 sudo[215237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:30 compute-1 python3.9[215239]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 13:54:30 compute-1 useradd[215241]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 22 13:54:30 compute-1 useradd[215241]: add 'nova' to group 'libvirt'
Jan 22 13:54:30 compute-1 useradd[215241]: add 'nova' to shadow group 'libvirt'
Jan 22 13:54:30 compute-1 sudo[215237]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:30.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:30.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:30 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:30 compute-1 ceph-mon[81715]: pgmap v783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:31 compute-1 sshd-session[215272]: Accepted publickey for zuul from 192.168.122.30 port 45734 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:54:31 compute-1 systemd-logind[787]: New session 50 of user zuul.
Jan 22 13:54:31 compute-1 systemd[1]: Started Session 50 of User zuul.
Jan 22 13:54:31 compute-1 sshd-session[215272]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:54:31 compute-1 sshd-session[215275]: Received disconnect from 192.168.122.30 port 45734:11: disconnected by user
Jan 22 13:54:31 compute-1 sshd-session[215275]: Disconnected from user zuul 192.168.122.30 port 45734
Jan 22 13:54:31 compute-1 sshd-session[215272]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:54:31 compute-1 systemd[1]: session-50.scope: Deactivated successfully.
Jan 22 13:54:31 compute-1 systemd-logind[787]: Session 50 logged out. Waiting for processes to exit.
Jan 22 13:54:31 compute-1 systemd-logind[787]: Removed session 50.
Jan 22 13:54:31 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:32.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:32 compute-1 python3.9[215425]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:32.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.934020) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072934082, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 547, "num_deletes": 251, "total_data_size": 649288, "memory_usage": 660448, "flush_reason": "Manual Compaction"}
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072938941, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 415944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20522, "largest_seqno": 21064, "table_properties": {"data_size": 413181, "index_size": 735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7315, "raw_average_key_size": 19, "raw_value_size": 407344, "raw_average_value_size": 1092, "num_data_blocks": 33, "num_entries": 373, "num_filter_entries": 373, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090053, "oldest_key_time": 1769090053, "file_creation_time": 1769090072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 4962 microseconds, and 1902 cpu microseconds.
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.938979) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 415944 bytes OK
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.939008) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.940992) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.941012) EVENT_LOG_v1 {"time_micros": 1769090072941006, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.941033) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 646048, prev total WAL file size 646048, number of live WAL files 2.
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.941624) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(406KB)], [39(9437KB)]
Jan 22 13:54:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072941697, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 10080425, "oldest_snapshot_seqno": -1}
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5325 keys, 8372498 bytes, temperature: kUnknown
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090073006212, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 8372498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8338056, "index_size": 19996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 138930, "raw_average_key_size": 26, "raw_value_size": 8242044, "raw_average_value_size": 1547, "num_data_blocks": 796, "num_entries": 5325, "num_filter_entries": 5325, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769090072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.006530) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8372498 bytes
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.008221) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.0 rd, 129.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.2 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(44.4) write-amplify(20.1) OK, records in: 5840, records dropped: 515 output_compression: NoCompression
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.008247) EVENT_LOG_v1 {"time_micros": 1769090073008235, "job": 22, "event": "compaction_finished", "compaction_time_micros": 64614, "compaction_time_cpu_micros": 20758, "output_level": 6, "num_output_files": 1, "total_output_size": 8372498, "num_input_records": 5840, "num_output_records": 5325, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090073008468, "job": 22, "event": "table_file_deletion", "file_number": 41}
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090073010452, "job": 22, "event": "table_file_deletion", "file_number": 39}
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.941537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.010601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.010612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.010616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.010620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.010624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:33 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:33 compute-1 ceph-mon[81715]: pgmap v784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:54:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:54:33 compute-1 python3.9[215546]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090072.1129656-2660-18250776302670/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:33 compute-1 sudo[215547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:54:33 compute-1 sudo[215547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:33 compute-1 sudo[215547]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:33 compute-1 sudo[215572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:54:33 compute-1 sudo[215572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:33 compute-1 sudo[215572]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:33 compute-1 python3.9[215746]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:34 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:34 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:34 compute-1 python3.9[215822]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:34.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 13:54:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Cumulative writes: 6587 writes, 26K keys, 6587 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6587 writes, 1237 syncs, 5.32 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 560 writes, 844 keys, 560 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s
                                           Interval WAL: 560 writes, 276 syncs, 2.03 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 22 13:54:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:34.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:34 compute-1 python3.9[215972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:35 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:35 compute-1 ceph-mon[81715]: pgmap v785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:35 compute-1 python3.9[216093]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090074.3972583-2660-63576531896094/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:36 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:36 compute-1 python3.9[216243]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:36.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:36 compute-1 python3.9[216364]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090075.664014-2660-22902125891366/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:36.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:37 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:37 compute-1 ceph-mon[81715]: pgmap v786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:37 compute-1 python3.9[216514]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:37 compute-1 python3.9[216635]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090076.8212724-2660-19267187141685/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:37 compute-1 podman[216636]: 2026-01-22 13:54:37.96430828 +0000 UTC m=+0.094722350 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:54:38 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:38.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:38 compute-1 python3.9[216812]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:38.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:39 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:39 compute-1 ceph-mon[81715]: pgmap v787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:39 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:39 compute-1 python3.9[216933]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090078.0048456-2660-203431774405702/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:40 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:40 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:40.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:40 compute-1 sudo[217083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsmrhzaartjfhujouajavfndowijiolt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090080.2999897-2909-75350283421118/AnsiballZ_file.py'
Jan 22 13:54:40 compute-1 sudo[217083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:40.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:40 compute-1 python3.9[217085]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:54:40 compute-1 sudo[217083]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:41 compute-1 ceph-mon[81715]: pgmap v788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:41 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:41 compute-1 sudo[217235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfhewfankxisqkihthypixgkmtoisrts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090081.1007955-2933-207533567851050/AnsiballZ_copy.py'
Jan 22 13:54:41 compute-1 sudo[217235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:41 compute-1 python3.9[217237]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:54:41 compute-1 sudo[217235]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:42 compute-1 sudo[217387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwxlkojtayajgkcuttnjlzdcduifuunz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090081.860735-2957-30330976950810/AnsiballZ_stat.py'
Jan 22 13:54:42 compute-1 sudo[217387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:42 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:42 compute-1 python3.9[217389]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:54:42 compute-1 sudo[217387]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:42.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:42.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:43 compute-1 sudo[217539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otsxtsafuzmbfjkieyuabwszgwycbidq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090082.6020703-2982-100454722767971/AnsiballZ_stat.py'
Jan 22 13:54:43 compute-1 sudo[217539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:43 compute-1 python3.9[217541]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:43 compute-1 sudo[217539]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:43 compute-1 ceph-mon[81715]: pgmap v789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:43 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:43 compute-1 sudo[217662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txvppdgndqmfojwigytduqhehypgsqdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090082.6020703-2982-100454722767971/AnsiballZ_copy.py'
Jan 22 13:54:43 compute-1 sudo[217662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:43 compute-1 python3.9[217664]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769090082.6020703-2982-100454722767971/.source _original_basename=.n8ce4_a6 follow=False checksum=bf1e2aecb466d047605f32ca3ded8b7745e19a70 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 22 13:54:43 compute-1 sudo[217662]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:44.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:44.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:44 compute-1 python3.9[217816]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:54:45 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:45 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:45 compute-1 python3.9[217968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:46 compute-1 ceph-mon[81715]: pgmap v790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:46 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:46 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:46 compute-1 python3.9[218089]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090085.216663-3059-197895201000764/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:46.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:46.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:47 compute-1 python3.9[218239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:54:47.427 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:54:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:54:47.428 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:54:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:54:47.428 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:54:47 compute-1 ceph-mon[81715]: pgmap v791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:47 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:47 compute-1 python3.9[218360]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090086.5527017-3104-173864243702517/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:48.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:48 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:48 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:48 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:48 compute-1 sudo[218510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fystcomsfqbhwjjsorwcwafyyczsajja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090088.3270752-3155-88067453551300/AnsiballZ_container_config_data.py'
Jan 22 13:54:48 compute-1 sudo[218510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:48.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:48 compute-1 python3.9[218512]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 22 13:54:48 compute-1 sudo[218510]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:49 compute-1 ceph-mon[81715]: pgmap v792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:49 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:49 compute-1 sudo[218662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hswemftqsvlammygluvzhwsmbwemjksi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090089.4673305-3188-219388840167615/AnsiballZ_container_config_hash.py'
Jan 22 13:54:49 compute-1 sudo[218662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:50 compute-1 python3.9[218664]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 13:54:50 compute-1 sudo[218662]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:50.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:50 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:50.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:51 compute-1 sudo[218814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skndmjoimfhpevxzhwfkdigyiiyqdvsa ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769090090.584535-3218-214863640241349/AnsiballZ_edpm_container_manage.py'
Jan 22 13:54:51 compute-1 sudo[218814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:51 compute-1 python3[218816]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 13:54:52 compute-1 ceph-mon[81715]: pgmap v793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:52 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:52.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:52.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:52 compute-1 ceph-mon[81715]: pgmap v794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:52 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:53 compute-1 podman[218848]: 2026-01-22 13:54:53.104788351 +0000 UTC m=+0.092160981 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 13:54:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:54 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:54 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:54.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:54.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:55 compute-1 ceph-mon[81715]: pgmap v795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:55 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:56.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:56.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:57 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:57 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:58.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:54:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:58.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:00.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:00.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:02.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:02.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:03 compute-1 ceph-mon[81715]: pgmap v796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:03 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:03 compute-1 ceph-mon[81715]: pgmap v797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:03 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:03 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:03 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:03 compute-1 ceph-mon[81715]: pgmap v798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:03 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:03 compute-1 podman[218829]: 2026-01-22 13:55:03.83936425 +0000 UTC m=+12.421506675 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 22 13:55:03 compute-1 podman[218969]: 2026-01-22 13:55:03.992076122 +0000 UTC m=+0.050711253 container create 4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=edpm, container_name=nova_compute_init, org.label-schema.license=GPLv2)
Jan 22 13:55:03 compute-1 podman[218969]: 2026-01-22 13:55:03.962581772 +0000 UTC m=+0.021216923 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 22 13:55:03 compute-1 python3[218816]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 22 13:55:04 compute-1 sudo[218814]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:04.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:04 compute-1 sudo[219158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufnkvxtnzclqezgupemlxhukjklwgmja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090104.330334-3242-207231073496496/AnsiballZ_stat.py'
Jan 22 13:55:04 compute-1 sudo[219158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:04 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:04 compute-1 ceph-mon[81715]: pgmap v799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:04 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:04 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:04 compute-1 ceph-mon[81715]: pgmap v800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:04 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:04 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:04 compute-1 python3.9[219160]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:55:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:55:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:04.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:55:04 compute-1 sudo[219158]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:05 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:05 compute-1 sudo[219312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkzfingbsgqbrldsugrhkkdjlbepxsjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090105.6618261-3278-265627481022479/AnsiballZ_container_config_data.py'
Jan 22 13:55:05 compute-1 sudo[219312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:06 compute-1 python3.9[219314]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 22 13:55:06 compute-1 sudo[219312]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:06.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:06.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:06 compute-1 ceph-mon[81715]: pgmap v801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:06 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:06 compute-1 sudo[219464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmvajyqqumlffnilofecwddluorkpsuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090106.7192497-3311-184240002055297/AnsiballZ_container_config_hash.py'
Jan 22 13:55:06 compute-1 sudo[219464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:07 compute-1 python3.9[219466]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 13:55:07 compute-1 sudo[219464]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:08 compute-1 sudo[219637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iladanntemrgahzgzmkfwhgjhagkywmd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769090107.8220396-3341-142748677163819/AnsiballZ_edpm_container_manage.py'
Jan 22 13:55:08 compute-1 sudo[219637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:08 compute-1 podman[219570]: 2026-01-22 13:55:08.140889874 +0000 UTC m=+0.128142388 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 13:55:08 compute-1 python3[219640]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 13:55:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:08.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:08 compute-1 podman[219681]: 2026-01-22 13:55:08.547553177 +0000 UTC m=+0.026283683 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 22 13:55:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 13:55:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:08.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 13:55:08 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:08 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:08 compute-1 podman[219681]: 2026-01-22 13:55:08.916888986 +0000 UTC m=+0.395619462 container create 026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 22 13:55:08 compute-1 python3[219640]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 22 13:55:09 compute-1 sudo[219637]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:09 compute-1 sudo[219868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrfurnuamstwghmyvdibbwjggixoqhek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090109.2593637-3365-160044071960360/AnsiballZ_stat.py'
Jan 22 13:55:09 compute-1 sudo[219868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:09 compute-1 python3.9[219870]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:55:09 compute-1 sudo[219868]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:09 compute-1 ceph-mon[81715]: pgmap v802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:09 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:10.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:10 compute-1 sudo[220022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cykrqcthnenfuvimxfpldrbvtjzvzhyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090110.3839135-3392-150714116257472/AnsiballZ_file.py'
Jan 22 13:55:10 compute-1 sudo[220022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:10.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:10 compute-1 python3.9[220024]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:55:10 compute-1 sudo[220022]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:11 compute-1 sudo[220173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cidiphdqtipckguyrrtkvnjfhusligye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090110.926452-3392-104437649725952/AnsiballZ_copy.py'
Jan 22 13:55:11 compute-1 sudo[220173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:11 compute-1 python3.9[220175]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769090110.926452-3392-104437649725952/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:55:11 compute-1 sudo[220173]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:11 compute-1 sudo[220249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oibpxdqqfamzwmxoxxafhevounktyipc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090110.926452-3392-104437649725952/AnsiballZ_systemd.py'
Jan 22 13:55:11 compute-1 sudo[220249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:11 compute-1 ceph-mon[81715]: pgmap v803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:11 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:12 compute-1 python3.9[220251]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:55:12 compute-1 systemd[1]: Reloading.
Jan 22 13:55:12 compute-1 systemd-sysv-generator[220281]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:55:12 compute-1 systemd-rc-local-generator[220278]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:55:12 compute-1 sudo[220249]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:12.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:12 compute-1 sudo[220359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfjgugaapnhjkbdsxqxlccvxdtludsrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090110.926452-3392-104437649725952/AnsiballZ_systemd.py'
Jan 22 13:55:12 compute-1 sudo[220359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:12.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:13 compute-1 python3.9[220361]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:55:13 compute-1 systemd[1]: Reloading.
Jan 22 13:55:13 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:13 compute-1 ceph-mon[81715]: pgmap v804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:13 compute-1 systemd-rc-local-generator[220389]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:55:13 compute-1 systemd-sysv-generator[220392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:55:13 compute-1 systemd[1]: Starting nova_compute container...
Jan 22 13:55:13 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:55:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:13 compute-1 podman[220400]: 2026-01-22 13:55:13.544706178 +0000 UTC m=+0.099369398 container init 026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 13:55:13 compute-1 podman[220400]: 2026-01-22 13:55:13.551528946 +0000 UTC m=+0.106192156 container start 026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 13:55:13 compute-1 podman[220400]: nova_compute
Jan 22 13:55:13 compute-1 nova_compute[220416]: + sudo -E kolla_set_configs
Jan 22 13:55:13 compute-1 systemd[1]: Started nova_compute container.
Jan 22 13:55:13 compute-1 sudo[220359]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Validating config file
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Copying service configuration files
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Deleting /etc/ceph
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Creating directory /etc/ceph
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /etc/ceph
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Writing out command to execute
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:13 compute-1 nova_compute[220416]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 13:55:13 compute-1 nova_compute[220416]: ++ cat /run_command
Jan 22 13:55:13 compute-1 nova_compute[220416]: + CMD=nova-compute
Jan 22 13:55:13 compute-1 nova_compute[220416]: + ARGS=
Jan 22 13:55:13 compute-1 nova_compute[220416]: + sudo kolla_copy_cacerts
Jan 22 13:55:13 compute-1 nova_compute[220416]: + [[ ! -n '' ]]
Jan 22 13:55:13 compute-1 nova_compute[220416]: + . kolla_extend_start
Jan 22 13:55:13 compute-1 nova_compute[220416]: + echo 'Running command: '\''nova-compute'\'''
Jan 22 13:55:13 compute-1 nova_compute[220416]: Running command: 'nova-compute'
Jan 22 13:55:13 compute-1 nova_compute[220416]: + umask 0022
Jan 22 13:55:13 compute-1 nova_compute[220416]: + exec nova-compute
Jan 22 13:55:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:14 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:14 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:14 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:14.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:14.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:15 compute-1 ceph-mon[81715]: pgmap v805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:15 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:15 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:15 compute-1 python3.9[220578]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:55:16 compute-1 nova_compute[220416]: 2026-01-22 13:55:16.145 220420 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 13:55:16 compute-1 nova_compute[220416]: 2026-01-22 13:55:16.146 220420 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 13:55:16 compute-1 nova_compute[220416]: 2026-01-22 13:55:16.146 220420 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 13:55:16 compute-1 nova_compute[220416]: 2026-01-22 13:55:16.146 220420 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 22 13:55:16 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:16 compute-1 python3.9[220730]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:55:16 compute-1 nova_compute[220416]: 2026-01-22 13:55:16.324 220420 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:55:16 compute-1 nova_compute[220416]: 2026-01-22 13:55:16.342 220420 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:55:16 compute-1 nova_compute[220416]: 2026-01-22 13:55:16.343 220420 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 22 13:55:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:16.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:16.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:16 compute-1 nova_compute[220416]: 2026-01-22 13:55:16.997 220420 INFO nova.virt.driver [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.191 220420 INFO nova.compute.provider_config [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.207 220420 DEBUG oslo_concurrency.lockutils [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.208 220420 DEBUG oslo_concurrency.lockutils [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.208 220420 DEBUG oslo_concurrency.lockutils [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.209 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.209 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.209 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.209 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.209 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.209 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.210 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.210 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.210 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.210 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.210 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.210 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.210 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.211 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.211 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.211 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.211 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.211 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.211 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.212 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.212 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.212 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.212 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.212 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.212 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.212 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.213 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 python3.9[220882]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.213 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.213 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.213 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.213 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.214 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.214 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.214 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.214 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.215 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.215 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.215 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.215 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.215 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.215 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.216 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.216 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.216 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.216 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.216 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.217 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.217 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.217 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.217 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.217 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.217 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.217 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.218 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.218 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.218 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.218 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.218 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.218 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.218 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.219 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.219 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.219 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.219 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.219 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.219 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.220 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.220 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.220 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.220 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.220 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.220 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.221 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.221 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.221 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.221 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.222 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.222 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.222 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.222 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.222 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.223 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.223 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.223 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.223 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.223 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.223 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.224 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.224 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.224 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.224 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.224 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.224 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.224 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.225 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.225 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.225 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.225 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.225 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.225 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.225 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.226 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.226 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.226 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.226 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.226 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.226 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.226 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.227 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.227 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.227 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.227 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.227 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.228 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.228 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.228 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.228 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.228 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.228 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.228 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.229 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.229 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.229 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.229 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.229 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.229 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.229 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.230 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.230 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.230 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.230 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.230 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.230 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.230 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.231 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.231 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.231 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.231 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.231 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.231 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.231 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.232 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.232 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.232 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.232 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.232 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.232 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.232 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.233 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.233 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.233 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.233 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.233 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.234 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.234 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.234 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.234 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.234 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.234 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.234 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.235 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.235 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.235 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.235 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.235 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.235 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.235 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.236 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.236 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.236 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.236 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.236 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.236 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.237 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.237 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.237 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.237 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.237 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.237 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.237 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.238 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.238 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.238 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.238 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.238 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.238 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.238 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.239 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.239 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.239 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.239 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.239 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.239 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.239 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.240 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.240 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.240 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.240 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.240 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.240 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.240 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.241 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.241 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.241 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.241 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.241 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.241 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.242 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.242 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.242 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.242 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.242 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.242 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.242 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.243 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.243 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.243 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.243 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.243 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.244 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.244 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.244 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.244 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.244 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.244 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.245 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.245 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.245 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.245 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.245 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.245 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.245 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.246 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.246 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.246 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.246 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.246 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.246 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.246 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.247 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.247 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.247 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.247 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.247 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.248 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.248 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.248 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.248 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.248 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.248 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.249 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.249 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.249 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.249 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.249 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.249 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.250 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.250 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.250 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.250 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.250 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.251 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.251 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.251 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.251 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.252 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.252 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.252 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.252 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.252 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.252 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.253 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.253 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.253 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.253 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.253 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.254 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.254 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.254 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.254 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.255 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.255 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.255 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.255 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.255 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.256 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.256 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.256 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.256 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 ceph-mon[81715]: pgmap v806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:17 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.257 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.257 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.257 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.257 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.257 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.258 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.258 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.258 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.258 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.258 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.258 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.259 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.259 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.259 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.259 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.260 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.260 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.260 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.261 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.261 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.261 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.261 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.261 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.261 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.262 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.262 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.262 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.262 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.262 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.263 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.263 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.263 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.263 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.263 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.264 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.264 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.264 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.264 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.264 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.264 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.265 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.265 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.265 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.265 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.265 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.266 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.266 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.266 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.266 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.266 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.266 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.267 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.267 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.267 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.267 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.267 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.268 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.268 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.268 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.268 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.268 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.268 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.269 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.269 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.269 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.269 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.270 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.270 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.270 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.270 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.270 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.271 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.271 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.271 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.271 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.271 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.271 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.272 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.272 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.272 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.272 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.272 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.272 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.273 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.273 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.273 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.273 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.273 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.273 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.274 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.274 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.274 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.274 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.274 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.274 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.275 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.275 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.275 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.275 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.275 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.275 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.276 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.276 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.276 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.276 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.276 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.276 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.276 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.277 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.277 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.277 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.277 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.277 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.277 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.277 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.278 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.278 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.278 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.278 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.278 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.278 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.279 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.279 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.279 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.279 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.279 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.279 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.279 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.280 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.280 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.280 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.280 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.280 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.280 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.281 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.281 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.281 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.281 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.281 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.281 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.281 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.282 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.282 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.282 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.282 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.282 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.282 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.283 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.283 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.283 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.283 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.283 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.283 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.284 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.284 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.284 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.284 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.285 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.285 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.285 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.285 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.285 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.285 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.286 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.286 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.286 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.286 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.286 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.286 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.287 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.287 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.287 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.287 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.287 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.287 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.288 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.288 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.288 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.288 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.288 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.288 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.289 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.289 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.289 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.289 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.289 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.289 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.290 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.290 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.290 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.290 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.290 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.290 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.291 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.291 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.291 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.291 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.291 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.291 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.291 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.292 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.292 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.292 220420 WARNING oslo_config.cfg [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 22 13:55:17 compute-1 nova_compute[220416]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 22 13:55:17 compute-1 nova_compute[220416]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 22 13:55:17 compute-1 nova_compute[220416]: and ``live_migration_inbound_addr`` respectively.
Jan 22 13:55:17 compute-1 nova_compute[220416]: ).  Its value may be silently ignored in the future.
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.292 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.293 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.293 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.293 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.293 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.293 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.293 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.293 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.294 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.294 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.294 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.294 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.294 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.294 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.294 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.295 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.295 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.295 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.295 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rbd_secret_uuid        = 088fe176-0106-5401-803c-2da38b73b76a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.295 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.295 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.296 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.296 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.296 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.296 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.296 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.296 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.297 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.297 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.297 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.297 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.297 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.297 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.298 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.298 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.298 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.298 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.298 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.298 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.298 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.299 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.299 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.299 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.299 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.299 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.299 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.299 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.300 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.300 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.300 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.300 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.300 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.300 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.300 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.301 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.301 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.301 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.301 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.301 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.301 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.301 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.303 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.303 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.303 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.303 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.303 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.303 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.304 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.304 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.304 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.304 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.304 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.305 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.305 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.305 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.305 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.305 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.305 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.306 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.306 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.306 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.306 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.306 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.307 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.307 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.307 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.307 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.307 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.307 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.308 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.308 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.308 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.308 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.308 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.308 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.310 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.310 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.310 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.310 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.310 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.311 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.311 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.311 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.311 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.311 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.311 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.313 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.313 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.313 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.313 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.313 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.313 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.313 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.314 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.314 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.314 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.314 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.314 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.315 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.315 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.315 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.315 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.315 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.315 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.315 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.316 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.316 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.316 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.316 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.316 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.316 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.317 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.317 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.317 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.317 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.317 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.317 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.317 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.318 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.318 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.318 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.318 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.318 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.318 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.318 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.319 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.319 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.319 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.319 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.319 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.319 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.319 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.320 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.320 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.320 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.320 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.320 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.320 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.321 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.321 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.321 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.321 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.321 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.321 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.322 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.322 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.322 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.322 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.322 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.322 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.322 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.323 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.323 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.323 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.323 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.323 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.323 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.325 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.325 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.325 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.325 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.325 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.325 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.325 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.326 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.326 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.326 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.326 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.326 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.326 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.327 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.327 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.327 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.327 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.327 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.327 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.329 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.329 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.329 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.329 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.329 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.329 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.330 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.330 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.330 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.330 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.330 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.330 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.331 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.331 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.331 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.331 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.331 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.331 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.332 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.332 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.332 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.332 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.332 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.332 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.332 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.333 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.333 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.333 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.333 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.333 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.333 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.333 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.334 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.334 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.334 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.334 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.334 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.334 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.334 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.335 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.335 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.335 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.335 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.335 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.335 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.335 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.336 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.336 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.336 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.336 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.336 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.336 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.336 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.337 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.337 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.337 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.337 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.337 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.337 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.338 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.338 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.338 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.338 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.338 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.338 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.339 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.339 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.339 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.339 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.339 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.339 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.339 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.340 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.340 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.340 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.340 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.340 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.340 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.340 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.342 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.342 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.342 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.342 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.342 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.342 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.342 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.343 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.343 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.343 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.343 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.343 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.343 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.344 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.344 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.344 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.344 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.344 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.344 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.345 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.345 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.345 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.345 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.345 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.345 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.345 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.347 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.347 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.347 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.347 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.347 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.347 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.347 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.348 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.348 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.348 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.348 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.348 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.348 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.348 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.349 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.349 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.349 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.349 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.349 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.349 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.349 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.350 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.350 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.350 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.350 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.350 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.350 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.350 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.351 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.351 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.351 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.351 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.351 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.351 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.351 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.352 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.352 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.352 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.352 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.352 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.352 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.352 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.353 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.353 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.353 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.353 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.353 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.353 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.354 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.354 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.354 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.354 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.354 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.354 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.354 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.355 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.355 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.355 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.355 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.355 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.355 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.355 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.356 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.356 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.356 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.356 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.356 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.358 220420 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.373 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.374 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.374 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.374 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 22 13:55:17 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Jan 22 13:55:17 compute-1 systemd[1]: Started libvirt QEMU daemon.
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.451 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb979b03460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.453 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb979b03460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.455 220420 INFO nova.virt.libvirt.driver [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Connection event '1' reason 'None'
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.496 220420 WARNING nova.virt.libvirt.driver [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Jan 22 13:55:17 compute-1 nova_compute[220416]: 2026-01-22 13:55:17.496 220420 DEBUG nova.virt.libvirt.volume.mount [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 22 13:55:18 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.275 220420 INFO nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Libvirt host capabilities <capabilities>
Jan 22 13:55:18 compute-1 nova_compute[220416]: 
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <host>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <uuid>2198fae5-1aa3-4940-83f6-677ed40734bb</uuid>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <cpu>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <arch>x86_64</arch>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model>EPYC-Rome-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <vendor>AMD</vendor>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <microcode version='16777317'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <signature family='23' model='49' stepping='0'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='x2apic'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='tsc-deadline'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='osxsave'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='hypervisor'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='tsc_adjust'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='spec-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='stibp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='arch-capabilities'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='ssbd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='cmp_legacy'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='topoext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='virt-ssbd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='lbrv'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='tsc-scale'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='vmcb-clean'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='pause-filter'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='pfthreshold'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='svme-addr-chk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='rdctl-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='skip-l1dfl-vmentry'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='mds-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature name='pschange-mc-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <pages unit='KiB' size='4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <pages unit='KiB' size='2048'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <pages unit='KiB' size='1048576'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </cpu>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <power_management>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <suspend_mem/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </power_management>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <iommu support='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <migration_features>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <live/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <uri_transports>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <uri_transport>tcp</uri_transport>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <uri_transport>rdma</uri_transport>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </uri_transports>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </migration_features>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <topology>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <cells num='1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <cell id='0'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:           <memory unit='KiB'>7864312</memory>
Jan 22 13:55:18 compute-1 nova_compute[220416]:           <pages unit='KiB' size='4'>1966078</pages>
Jan 22 13:55:18 compute-1 nova_compute[220416]:           <pages unit='KiB' size='2048'>0</pages>
Jan 22 13:55:18 compute-1 nova_compute[220416]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 22 13:55:18 compute-1 nova_compute[220416]:           <distances>
Jan 22 13:55:18 compute-1 nova_compute[220416]:             <sibling id='0' value='10'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:           </distances>
Jan 22 13:55:18 compute-1 nova_compute[220416]:           <cpus num='8'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:           </cpus>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         </cell>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </cells>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </topology>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <cache>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </cache>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <secmodel>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model>selinux</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <doi>0</doi>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </secmodel>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <secmodel>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model>dac</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <doi>0</doi>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </secmodel>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </host>
Jan 22 13:55:18 compute-1 nova_compute[220416]: 
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <guest>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <os_type>hvm</os_type>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <arch name='i686'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <wordsize>32</wordsize>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <domain type='qemu'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <domain type='kvm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </arch>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <features>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <pae/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <nonpae/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <acpi default='on' toggle='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <apic default='on' toggle='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <cpuselection/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <deviceboot/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <disksnapshot default='on' toggle='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <externalSnapshot/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </features>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </guest>
Jan 22 13:55:18 compute-1 nova_compute[220416]: 
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <guest>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <os_type>hvm</os_type>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <arch name='x86_64'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <wordsize>64</wordsize>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <domain type='qemu'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <domain type='kvm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </arch>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <features>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <acpi default='on' toggle='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <apic default='on' toggle='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <cpuselection/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <deviceboot/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <disksnapshot default='on' toggle='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <externalSnapshot/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </features>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </guest>
Jan 22 13:55:18 compute-1 nova_compute[220416]: 
Jan 22 13:55:18 compute-1 nova_compute[220416]: </capabilities>
Jan 22 13:55:18 compute-1 nova_compute[220416]: 
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.283 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.303 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 22 13:55:18 compute-1 nova_compute[220416]: <domainCapabilities>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <domain>kvm</domain>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <arch>i686</arch>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <vcpu max='4096'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <iothreads supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <os supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <enum name='firmware'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <loader supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>rom</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pflash</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='readonly'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>yes</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>no</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='secure'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>no</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </loader>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </os>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <cpu>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>on</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>off</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='maximumMigratable'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>on</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>off</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <vendor>AMD</vendor>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='succor'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='custom' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 sudo[221094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zazfmllzdsiydoddvpwpigimcibvtele ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090117.7343678-3572-63883600309804/AnsiballZ_podman_container.py'
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 sudo[221094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cooperlake'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='KnightsMill'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='athlon'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='athlon-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='core2duo'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='core2duo-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='coreduo'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='coreduo-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='n270'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='n270-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='phenom'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='phenom-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </cpu>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <memoryBacking supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <enum name='sourceType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>file</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>anonymous</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>memfd</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </memoryBacking>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <devices>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <disk supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='diskDevice'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>disk</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>cdrom</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>floppy</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>lun</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='bus'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>fdc</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>scsi</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>usb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>sata</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </disk>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <graphics supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vnc</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>egl-headless</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>dbus</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </graphics>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <video supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='modelType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vga</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>cirrus</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>none</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>bochs</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>ramfb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </video>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <hostdev supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='mode'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>subsystem</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='startupPolicy'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>default</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>mandatory</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>requisite</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>optional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='subsysType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>usb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pci</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>scsi</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='capsType'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='pciBackend'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </hostdev>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <rng supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>random</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>egd</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>builtin</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </rng>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <filesystem supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='driverType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>path</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>handle</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtiofs</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </filesystem>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <tpm supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tpm-tis</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tpm-crb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>emulator</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>external</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendVersion'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>2.0</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </tpm>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <redirdev supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='bus'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>usb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </redirdev>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <channel supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pty</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>unix</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </channel>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <crypto supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>qemu</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>builtin</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </crypto>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <interface supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>default</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>passt</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </interface>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <panic supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>isa</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>hyperv</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </panic>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <console supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>null</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vc</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pty</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>dev</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>file</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pipe</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>stdio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>udp</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tcp</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>unix</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>qemu-vdagent</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>dbus</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </console>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </devices>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <features>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <gic supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <genid supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <backup supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <async-teardown supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <s390-pv supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <ps2 supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <tdx supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <sev supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <sgx supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <hyperv supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='features'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>relaxed</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vapic</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>spinlocks</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vpindex</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>runtime</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>synic</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>stimer</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>reset</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vendor_id</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>frequencies</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>reenlightenment</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tlbflush</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>ipi</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>avic</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>emsr_bitmap</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>xmm_input</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <defaults>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </defaults>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </hyperv>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <launchSecurity supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </features>
Jan 22 13:55:18 compute-1 nova_compute[220416]: </domainCapabilities>
Jan 22 13:55:18 compute-1 nova_compute[220416]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.313 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 22 13:55:18 compute-1 nova_compute[220416]: <domainCapabilities>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <domain>kvm</domain>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <arch>i686</arch>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <vcpu max='240'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <iothreads supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <os supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <enum name='firmware'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <loader supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>rom</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pflash</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='readonly'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>yes</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>no</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='secure'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>no</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </loader>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </os>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <cpu>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>on</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>off</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='maximumMigratable'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>on</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>off</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <vendor>AMD</vendor>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='succor'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='custom' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cooperlake'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='KnightsMill'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='athlon'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='athlon-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='core2duo'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='core2duo-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='coreduo'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='coreduo-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='n270'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='n270-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='phenom'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='phenom-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </cpu>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <memoryBacking supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <enum name='sourceType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>file</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>anonymous</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>memfd</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </memoryBacking>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <devices>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <disk supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='diskDevice'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>disk</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>cdrom</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>floppy</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>lun</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='bus'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>ide</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>fdc</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>scsi</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>usb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>sata</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </disk>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <graphics supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vnc</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>egl-headless</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>dbus</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </graphics>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <video supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='modelType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vga</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>cirrus</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>none</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>bochs</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>ramfb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </video>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <hostdev supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='mode'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>subsystem</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='startupPolicy'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>default</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>mandatory</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>requisite</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>optional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='subsysType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>usb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pci</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>scsi</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='capsType'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='pciBackend'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </hostdev>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <rng supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>random</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>egd</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>builtin</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </rng>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <filesystem supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='driverType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>path</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>handle</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtiofs</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </filesystem>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <tpm supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tpm-tis</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tpm-crb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>emulator</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>external</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendVersion'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>2.0</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </tpm>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <redirdev supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='bus'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>usb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </redirdev>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <channel supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pty</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>unix</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </channel>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <crypto supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>qemu</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>builtin</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </crypto>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <interface supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>default</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>passt</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </interface>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <panic supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>isa</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>hyperv</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </panic>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <console supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>null</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vc</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pty</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>dev</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>file</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pipe</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>stdio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>udp</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tcp</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>unix</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>qemu-vdagent</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>dbus</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </console>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </devices>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <features>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <gic supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <genid supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <backup supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <async-teardown supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <s390-pv supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <ps2 supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <tdx supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <sev supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <sgx supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <hyperv supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='features'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>relaxed</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vapic</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>spinlocks</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vpindex</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>runtime</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>synic</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>stimer</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>reset</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vendor_id</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>frequencies</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>reenlightenment</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tlbflush</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>ipi</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>avic</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>emsr_bitmap</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>xmm_input</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <defaults>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </defaults>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </hyperv>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <launchSecurity supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </features>
Jan 22 13:55:18 compute-1 nova_compute[220416]: </domainCapabilities>
Jan 22 13:55:18 compute-1 nova_compute[220416]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.369 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.374 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 22 13:55:18 compute-1 nova_compute[220416]: <domainCapabilities>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <domain>kvm</domain>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <arch>x86_64</arch>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <vcpu max='4096'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <iothreads supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <os supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <enum name='firmware'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>efi</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <loader supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>rom</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pflash</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='readonly'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>yes</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>no</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='secure'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>yes</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>no</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </loader>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </os>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <cpu>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>on</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>off</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='maximumMigratable'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>on</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>off</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <vendor>AMD</vendor>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='succor'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='custom' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cooperlake'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='KnightsMill'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:18.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='athlon'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='athlon-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='core2duo'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='core2duo-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='coreduo'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='coreduo-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='n270'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='n270-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='phenom'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='phenom-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </cpu>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <memoryBacking supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <enum name='sourceType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>file</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>anonymous</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>memfd</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </memoryBacking>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <devices>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <disk supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='diskDevice'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>disk</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>cdrom</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>floppy</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>lun</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='bus'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>fdc</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>scsi</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>usb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>sata</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </disk>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <graphics supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vnc</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>egl-headless</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>dbus</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </graphics>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <video supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='modelType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vga</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>cirrus</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>none</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>bochs</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>ramfb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </video>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <hostdev supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='mode'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>subsystem</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='startupPolicy'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>default</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>mandatory</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>requisite</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>optional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='subsysType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>usb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pci</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>scsi</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='capsType'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='pciBackend'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </hostdev>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <rng supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>random</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>egd</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>builtin</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </rng>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <filesystem supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='driverType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>path</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>handle</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtiofs</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </filesystem>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <tpm supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tpm-tis</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tpm-crb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>emulator</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>external</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendVersion'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>2.0</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </tpm>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <redirdev supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='bus'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>usb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </redirdev>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <channel supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pty</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>unix</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </channel>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <crypto supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>qemu</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>builtin</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </crypto>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <interface supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>default</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>passt</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </interface>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <panic supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>isa</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>hyperv</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </panic>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <console supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>null</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vc</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pty</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>dev</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>file</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pipe</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>stdio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>udp</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tcp</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>unix</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>qemu-vdagent</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>dbus</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </console>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </devices>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <features>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <gic supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <genid supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <backup supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <async-teardown supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <s390-pv supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <ps2 supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <tdx supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <sev supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <sgx supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <hyperv supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='features'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>relaxed</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vapic</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>spinlocks</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vpindex</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>runtime</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>synic</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>stimer</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>reset</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vendor_id</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>frequencies</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>reenlightenment</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tlbflush</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>ipi</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>avic</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>emsr_bitmap</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>xmm_input</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <defaults>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </defaults>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </hyperv>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <launchSecurity supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </features>
Jan 22 13:55:18 compute-1 nova_compute[220416]: </domainCapabilities>
Jan 22 13:55:18 compute-1 nova_compute[220416]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.453 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 22 13:55:18 compute-1 nova_compute[220416]: <domainCapabilities>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <domain>kvm</domain>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <arch>x86_64</arch>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <vcpu max='240'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <iothreads supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <os supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <enum name='firmware'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <loader supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>rom</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pflash</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='readonly'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>yes</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>no</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='secure'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>no</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </loader>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </os>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <cpu>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>on</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>off</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='maximumMigratable'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>on</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>off</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <vendor>AMD</vendor>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='succor'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <mode name='custom' supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cooperlake'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Denverton-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='EPYC-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Haswell-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='KnightsMill'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xop'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 python3.9[221098]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='la57'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='lam'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='hle'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='pku'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='erms'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='athlon'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='athlon-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='core2duo'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='core2duo-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='coreduo'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='coreduo-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='n270'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='n270-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='ss'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='phenom'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <blockers model='phenom-v1'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </blockers>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </mode>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </cpu>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <memoryBacking supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <enum name='sourceType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>file</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>anonymous</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <value>memfd</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </memoryBacking>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <devices>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <disk supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='diskDevice'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>disk</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>cdrom</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>floppy</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>lun</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='bus'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>ide</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>fdc</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>scsi</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>usb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>sata</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </disk>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <graphics supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vnc</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>egl-headless</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>dbus</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </graphics>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <video supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='modelType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vga</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>cirrus</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>none</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>bochs</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>ramfb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </video>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <hostdev supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='mode'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>subsystem</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='startupPolicy'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>default</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>mandatory</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>requisite</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>optional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='subsysType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>usb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pci</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>scsi</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='capsType'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='pciBackend'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </hostdev>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <rng supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>random</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>egd</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>builtin</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </rng>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <filesystem supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='driverType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>path</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>handle</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>virtiofs</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </filesystem>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <tpm supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tpm-tis</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tpm-crb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>emulator</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>external</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendVersion'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>2.0</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </tpm>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <redirdev supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='bus'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>usb</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </redirdev>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <channel supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pty</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>unix</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </channel>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <crypto supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>qemu</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>builtin</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </crypto>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <interface supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='backendType'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>default</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>passt</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </interface>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <panic supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='model'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>isa</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>hyperv</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </panic>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <console supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='type'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>null</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vc</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pty</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>dev</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>file</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>pipe</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>stdio</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>udp</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tcp</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>unix</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>qemu-vdagent</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>dbus</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </console>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </devices>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <features>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <gic supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <genid supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <backup supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <async-teardown supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <s390-pv supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <ps2 supported='yes'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <tdx supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <sev supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <sgx supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <hyperv supported='yes'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <enum name='features'>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>relaxed</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vapic</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>spinlocks</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vpindex</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>runtime</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>synic</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>stimer</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>reset</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>vendor_id</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>frequencies</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>reenlightenment</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>tlbflush</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>ipi</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>avic</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>emsr_bitmap</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <value>xmm_input</value>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </enum>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       <defaults>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:18 compute-1 nova_compute[220416]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:18 compute-1 nova_compute[220416]:       </defaults>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     </hyperv>
Jan 22 13:55:18 compute-1 nova_compute[220416]:     <launchSecurity supported='no'/>
Jan 22 13:55:18 compute-1 nova_compute[220416]:   </features>
Jan 22 13:55:18 compute-1 nova_compute[220416]: </domainCapabilities>
Jan 22 13:55:18 compute-1 nova_compute[220416]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.531 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.531 220420 INFO nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Secure Boot support detected
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.534 220420 INFO nova.virt.libvirt.driver [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.534 220420 INFO nova.virt.libvirt.driver [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.545 220420 DEBUG nova.virt.libvirt.driver [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] cpu compare xml: <cpu match="exact">
Jan 22 13:55:18 compute-1 nova_compute[220416]:   <model>Nehalem</model>
Jan 22 13:55:18 compute-1 nova_compute[220416]: </cpu>
Jan 22 13:55:18 compute-1 nova_compute[220416]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.548 220420 DEBUG nova.virt.libvirt.driver [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.632 220420 INFO nova.virt.node [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Determined node identity 9903a6f8-fb0a-4d8e-b632-398eaedd969e from /var/lib/nova/compute_id
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.659 220420 WARNING nova.compute.manager [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Compute nodes ['9903a6f8-fb0a-4d8e-b632-398eaedd969e'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 22 13:55:18 compute-1 sudo[221094]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.703 220420 INFO nova.compute.manager [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.752 220420 WARNING nova.compute.manager [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.753 220420 DEBUG oslo_concurrency.lockutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.753 220420 DEBUG oslo_concurrency.lockutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.753 220420 DEBUG oslo_concurrency.lockutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.754 220420 DEBUG nova.compute.resource_tracker [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 13:55:18 compute-1 nova_compute[220416]: 2026-01-22 13:55:18.754 220420 DEBUG oslo_concurrency.processutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:55:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:18.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:55:19 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/532836915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:19 compute-1 nova_compute[220416]: 2026-01-22 13:55:19.253 220420 DEBUG oslo_concurrency.processutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:55:19 compute-1 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 13:55:19 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Jan 22 13:55:19 compute-1 systemd[1]: Started libvirt nodedev daemon.
Jan 22 13:55:19 compute-1 sudo[221313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsrklkaslmtamcrozknnpbrolkausrxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090119.0484977-3596-111582892949934/AnsiballZ_systemd.py'
Jan 22 13:55:19 compute-1 sudo[221313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:19 compute-1 nova_compute[220416]: 2026-01-22 13:55:19.609 220420 WARNING nova.virt.libvirt.driver [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 13:55:19 compute-1 nova_compute[220416]: 2026-01-22 13:55:19.611 220420 DEBUG nova.compute.resource_tracker [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5299MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 13:55:19 compute-1 nova_compute[220416]: 2026-01-22 13:55:19.611 220420 DEBUG oslo_concurrency.lockutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:55:19 compute-1 nova_compute[220416]: 2026-01-22 13:55:19.611 220420 DEBUG oslo_concurrency.lockutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:55:19 compute-1 python3.9[221315]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:55:19 compute-1 systemd[1]: Stopping nova_compute container...
Jan 22 13:55:19 compute-1 nova_compute[220416]: 2026-01-22 13:55:19.795 220420 DEBUG oslo_concurrency.lockutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:55:19 compute-1 nova_compute[220416]: 2026-01-22 13:55:19.796 220420 DEBUG oslo_concurrency.lockutils [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 13:55:19 compute-1 nova_compute[220416]: 2026-01-22 13:55:19.796 220420 DEBUG oslo_concurrency.lockutils [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 13:55:19 compute-1 nova_compute[220416]: 2026-01-22 13:55:19.796 220420 DEBUG oslo_concurrency.lockutils [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 13:55:20 compute-1 virtqemud[220928]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 22 13:55:20 compute-1 virtqemud[220928]: hostname: compute-1
Jan 22 13:55:20 compute-1 virtqemud[220928]: End of file while reading data: Input/output error
Jan 22 13:55:20 compute-1 systemd[1]: libpod-026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a.scope: Deactivated successfully.
Jan 22 13:55:20 compute-1 systemd[1]: libpod-026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a.scope: Consumed 4.199s CPU time.
Jan 22 13:55:20 compute-1 podman[221321]: 2026-01-22 13:55:20.339414361 +0000 UTC m=+0.588968318 container died 026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm)
Jan 22 13:55:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:20.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:20 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a-userdata-shm.mount: Deactivated successfully.
Jan 22 13:55:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac-merged.mount: Deactivated successfully.
Jan 22 13:55:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:20.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:22.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:22.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:23 compute-1 ceph-mon[81715]: pgmap v807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:23 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:23 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:23 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/729966866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:23 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/532836915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:23 compute-1 podman[221321]: 2026-01-22 13:55:23.074506178 +0000 UTC m=+3.324060125 container cleanup 026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 13:55:23 compute-1 podman[221321]: nova_compute
Jan 22 13:55:23 compute-1 podman[221355]: nova_compute
Jan 22 13:55:23 compute-1 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 22 13:55:23 compute-1 systemd[1]: Stopped nova_compute container.
Jan 22 13:55:23 compute-1 systemd[1]: Starting nova_compute container...
Jan 22 13:55:23 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3079402314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:23 compute-1 ceph-mon[81715]: pgmap v808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:23 compute-1 podman[221368]: 2026-01-22 13:55:23.312930392 +0000 UTC m=+0.144641350 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 13:55:23 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:55:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:23 compute-1 podman[221369]: 2026-01-22 13:55:23.442773337 +0000 UTC m=+0.267504664 container init 026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Jan 22 13:55:23 compute-1 podman[221369]: 2026-01-22 13:55:23.450764407 +0000 UTC m=+0.275495704 container start 026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 13:55:23 compute-1 podman[221369]: nova_compute
Jan 22 13:55:23 compute-1 nova_compute[221400]: + sudo -E kolla_set_configs
Jan 22 13:55:23 compute-1 systemd[1]: Started nova_compute container.
Jan 22 13:55:23 compute-1 sudo[221313]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Validating config file
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Copying service configuration files
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Deleting /etc/ceph
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Creating directory /etc/ceph
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /etc/ceph
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Writing out command to execute
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:23 compute-1 nova_compute[221400]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 13:55:23 compute-1 nova_compute[221400]: ++ cat /run_command
Jan 22 13:55:23 compute-1 nova_compute[221400]: + CMD=nova-compute
Jan 22 13:55:23 compute-1 nova_compute[221400]: + ARGS=
Jan 22 13:55:23 compute-1 nova_compute[221400]: + sudo kolla_copy_cacerts
Jan 22 13:55:23 compute-1 nova_compute[221400]: + [[ ! -n '' ]]
Jan 22 13:55:23 compute-1 nova_compute[221400]: + . kolla_extend_start
Jan 22 13:55:23 compute-1 nova_compute[221400]: + echo 'Running command: '\''nova-compute'\'''
Jan 22 13:55:23 compute-1 nova_compute[221400]: Running command: 'nova-compute'
Jan 22 13:55:23 compute-1 nova_compute[221400]: + umask 0022
Jan 22 13:55:23 compute-1 nova_compute[221400]: + exec nova-compute
Jan 22 13:55:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:24 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:24 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:24 compute-1 sudo[221565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrwzoyvdfmpmympsptrtkbaqkwpssmwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090123.802127-3624-174955587002400/AnsiballZ_podman_container.py'
Jan 22 13:55:24 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:24 compute-1 ceph-mon[81715]: pgmap v809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:24 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:24 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:24 compute-1 sudo[221565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:24 compute-1 python3.9[221567]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 22 13:55:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:24.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:24 compute-1 systemd[1]: Started libpod-conmon-4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739.scope.
Jan 22 13:55:24 compute-1 systemd[1]: Started libcrun container.
Jan 22 13:55:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0cff6674c9c3a87c62f9af7a9880fa0c0580f48f5065ae7c4df316d438a506/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0cff6674c9c3a87c62f9af7a9880fa0c0580f48f5065ae7c4df316d438a506/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0cff6674c9c3a87c62f9af7a9880fa0c0580f48f5065ae7c4df316d438a506/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:24 compute-1 podman[221592]: 2026-01-22 13:55:24.598884991 +0000 UTC m=+0.126522654 container init 4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 22 13:55:24 compute-1 podman[221592]: 2026-01-22 13:55:24.61121054 +0000 UTC m=+0.138848183 container start 4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Jan 22 13:55:24 compute-1 python3.9[221567]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 22 13:55:24 compute-1 nova_compute_init[221613]: INFO:nova_statedir:Applying nova statedir ownership
Jan 22 13:55:24 compute-1 nova_compute_init[221613]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 22 13:55:24 compute-1 nova_compute_init[221613]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 22 13:55:24 compute-1 nova_compute_init[221613]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 22 13:55:24 compute-1 nova_compute_init[221613]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 22 13:55:24 compute-1 nova_compute_init[221613]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 22 13:55:24 compute-1 nova_compute_init[221613]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 22 13:55:24 compute-1 nova_compute_init[221613]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 22 13:55:24 compute-1 nova_compute_init[221613]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 22 13:55:24 compute-1 nova_compute_init[221613]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 22 13:55:24 compute-1 nova_compute_init[221613]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 22 13:55:24 compute-1 nova_compute_init[221613]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:24 compute-1 nova_compute_init[221613]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 22 13:55:24 compute-1 nova_compute_init[221613]: INFO:nova_statedir:Nova statedir ownership complete
Jan 22 13:55:24 compute-1 systemd[1]: libpod-4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739.scope: Deactivated successfully.
Jan 22 13:55:24 compute-1 podman[221627]: 2026-01-22 13:55:24.711117303 +0000 UTC m=+0.024239117 container died 4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 13:55:24 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739-userdata-shm.mount: Deactivated successfully.
Jan 22 13:55:24 compute-1 systemd[1]: var-lib-containers-storage-overlay-ae0cff6674c9c3a87c62f9af7a9880fa0c0580f48f5065ae7c4df316d438a506-merged.mount: Deactivated successfully.
Jan 22 13:55:24 compute-1 podman[221627]: 2026-01-22 13:55:24.752036866 +0000 UTC m=+0.065158630 container cleanup 4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 13:55:24 compute-1 systemd[1]: libpod-conmon-4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739.scope: Deactivated successfully.
Jan 22 13:55:24 compute-1 sudo[221565]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:24.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:25 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:25 compute-1 ceph-mon[81715]: pgmap v810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:25 compute-1 nova_compute[221400]: 2026-01-22 13:55:25.738 221408 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 13:55:25 compute-1 nova_compute[221400]: 2026-01-22 13:55:25.738 221408 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 13:55:25 compute-1 nova_compute[221400]: 2026-01-22 13:55:25.739 221408 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 13:55:25 compute-1 nova_compute[221400]: 2026-01-22 13:55:25.739 221408 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 22 13:55:25 compute-1 sshd-session[197460]: Connection closed by 192.168.122.30 port 42638
Jan 22 13:55:25 compute-1 sshd-session[197457]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:55:25 compute-1 systemd[1]: session-49.scope: Deactivated successfully.
Jan 22 13:55:25 compute-1 systemd[1]: session-49.scope: Consumed 2min 3.474s CPU time.
Jan 22 13:55:25 compute-1 systemd-logind[787]: Session 49 logged out. Waiting for processes to exit.
Jan 22 13:55:25 compute-1 systemd-logind[787]: Removed session 49.
Jan 22 13:55:25 compute-1 nova_compute[221400]: 2026-01-22 13:55:25.913 221408 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:55:25 compute-1 nova_compute[221400]: 2026-01-22 13:55:25.938 221408 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:55:25 compute-1 nova_compute[221400]: 2026-01-22 13:55:25.939 221408 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 22 13:55:26 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:26.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.557 221408 INFO nova.virt.driver [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.677 221408 INFO nova.compute.provider_config [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.695 221408 DEBUG oslo_concurrency.lockutils [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.696 221408 DEBUG oslo_concurrency.lockutils [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.696 221408 DEBUG oslo_concurrency.lockutils [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.696 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.697 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.697 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.697 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.697 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.698 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.698 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.698 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.699 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.699 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.699 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.699 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.700 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.700 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.700 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.700 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.701 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.701 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.701 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.701 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.702 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.702 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.702 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.703 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.703 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.703 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.703 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.704 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.704 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.704 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.704 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.705 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.705 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.705 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.705 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.705 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.706 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.706 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.706 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.707 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.707 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.707 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.707 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.708 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.708 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.708 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.708 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.709 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.709 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.709 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.709 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.710 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.710 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.710 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.711 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.711 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.711 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.711 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.712 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.712 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.712 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.712 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.713 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.713 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.713 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.713 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.713 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.714 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.714 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.714 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.714 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.714 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.715 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.715 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.715 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.715 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.715 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.716 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.716 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.716 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.716 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.716 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.717 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.717 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.717 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.717 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.717 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.718 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.718 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.718 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.718 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.719 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.719 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.719 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.719 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.719 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.719 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.720 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.720 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.720 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.720 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.720 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.721 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.721 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.721 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.721 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.722 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.722 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.722 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.722 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.722 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.722 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.723 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.723 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.723 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.723 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.723 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.724 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.724 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.724 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.724 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.724 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.725 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.725 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.725 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.725 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.725 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.726 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.726 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.726 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.726 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.726 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.727 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.727 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.727 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.727 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.727 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.727 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.728 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.728 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.728 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.728 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.728 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.729 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.729 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.729 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.729 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.729 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.730 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.730 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.730 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.730 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.730 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.731 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.731 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.731 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.731 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.731 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.732 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.732 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.732 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.732 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.732 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.733 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.733 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.733 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.733 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.733 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.734 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.734 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.734 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.734 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.734 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.734 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.735 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.735 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.735 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.735 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.735 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.736 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.736 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.736 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.736 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.736 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.737 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.737 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.737 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.737 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.737 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.738 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.738 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.738 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.738 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.738 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.738 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.739 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.739 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.739 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.739 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.739 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.739 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.739 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.740 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.740 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.740 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.740 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.740 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.740 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.740 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.741 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.741 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.741 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.741 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.741 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.741 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.741 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.742 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.742 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.742 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.742 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.742 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.742 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.743 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.743 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.743 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.743 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.743 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.744 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.744 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.744 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.744 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.744 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.745 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.745 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.745 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.745 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.745 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.746 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.746 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.746 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.746 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.746 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.746 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.747 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.747 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.747 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.747 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.747 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.747 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.748 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.748 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.748 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.748 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.748 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.748 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.748 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.749 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.749 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.749 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.749 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.749 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.750 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.750 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.750 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.750 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.751 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.751 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.751 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.751 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.751 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.752 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.752 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.752 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.752 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.752 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.753 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.753 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.753 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.753 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.753 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.754 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.754 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.754 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.754 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.754 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.755 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.755 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.755 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.755 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.756 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.756 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.756 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.756 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.756 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.756 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.757 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.757 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.757 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.757 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.757 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.758 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.758 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.758 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.758 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.758 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.759 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.759 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.759 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.759 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.759 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.759 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.760 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.760 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.760 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.760 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.761 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.761 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.761 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.761 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.761 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.762 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.762 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.762 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.762 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.762 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.763 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.763 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.763 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.763 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.763 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.763 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.764 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.764 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.764 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.764 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.764 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.765 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.765 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.765 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.765 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.765 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.766 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.766 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.766 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.766 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.766 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.767 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.767 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.767 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.767 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.767 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.767 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.768 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.768 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.768 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.769 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.769 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.769 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.769 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.769 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.769 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.770 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.770 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.770 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.770 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.770 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.771 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.771 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.771 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.771 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.771 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.772 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.772 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.772 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.772 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.772 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.773 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.773 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.773 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.773 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.773 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.773 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.774 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.774 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.774 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.774 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.774 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.775 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.775 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.775 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.775 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.775 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.776 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.776 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.776 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.776 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.776 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.777 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.777 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.778 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.778 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.778 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.779 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.779 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.779 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.779 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.779 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.779 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.780 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.780 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.780 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.780 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.780 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.781 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.781 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.781 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.781 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.781 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.782 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.782 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.782 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.782 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.783 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.783 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.783 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.783 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.784 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.784 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.784 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.784 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.784 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.784 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.785 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.785 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.785 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.785 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.785 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.785 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.786 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.786 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.786 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.786 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.787 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.787 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.787 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.788 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.788 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.788 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.788 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.788 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.788 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.789 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.789 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.789 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.789 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.789 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.790 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.790 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.790 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.790 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.791 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.791 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.791 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.791 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.791 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.792 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.792 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.792 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.792 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.792 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.793 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.793 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.793 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.793 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.793 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.794 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.794 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.794 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.794 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.794 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.795 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.795 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.795 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.795 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.795 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.796 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.796 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.796 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.796 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.796 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.797 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.797 221408 WARNING oslo_config.cfg [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 22 13:55:26 compute-1 nova_compute[221400]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 22 13:55:26 compute-1 nova_compute[221400]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 22 13:55:26 compute-1 nova_compute[221400]: and ``live_migration_inbound_addr`` respectively.
Jan 22 13:55:26 compute-1 nova_compute[221400]: ).  Its value may be silently ignored in the future.
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.797 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.797 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.798 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.798 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.798 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.798 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.799 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.799 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.799 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.799 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.800 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.800 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.800 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.800 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.800 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.800 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.801 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.801 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.801 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rbd_secret_uuid        = 088fe176-0106-5401-803c-2da38b73b76a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.801 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.801 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.802 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.802 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.802 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.802 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.802 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.803 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.803 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.803 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.803 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.803 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.804 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.804 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.804 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.804 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.805 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.805 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.805 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.805 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.805 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.806 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.806 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.806 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.806 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.807 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.807 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.807 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.807 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.807 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.807 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.808 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.808 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.808 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.808 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.808 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.808 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.808 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.809 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.809 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.809 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.809 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.809 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.809 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.809 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.810 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.810 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.810 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.810 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.810 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.810 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.810 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.811 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.811 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.811 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.811 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.811 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.812 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.812 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.812 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.812 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.813 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.813 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.813 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.813 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:26.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.813 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.814 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.814 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.814 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.814 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.814 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.814 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.814 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.815 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.815 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.815 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.815 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.815 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.815 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.815 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.816 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.816 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.816 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.816 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.816 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.816 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.816 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.817 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.817 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.817 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.817 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.817 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.817 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.817 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.818 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.818 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.818 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.818 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.818 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.818 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.818 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.819 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.819 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.819 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.819 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.819 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.819 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.819 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.820 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.820 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.820 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.820 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.820 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.820 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.820 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.821 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.821 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.821 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.821 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.821 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.821 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.822 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.822 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.822 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.822 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.822 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.822 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.822 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.823 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.823 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.823 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.823 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.823 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.823 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.823 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.824 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.824 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.824 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.824 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.824 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.824 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.824 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.825 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.825 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.825 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.825 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.825 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.825 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.825 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.826 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.826 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.826 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.826 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.826 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.826 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.826 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.827 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.827 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.827 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.827 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.827 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.827 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.829 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.829 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.829 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.829 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.829 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.829 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.829 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.830 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.830 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.830 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.830 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.830 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.830 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.831 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.831 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.831 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.831 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.831 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.831 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.831 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.832 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.832 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.832 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.832 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.832 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.832 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.832 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.833 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.833 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.833 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.833 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.833 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.833 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.833 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.834 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.834 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.834 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.834 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.834 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.834 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.834 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.836 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.836 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.836 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.836 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.836 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.836 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.836 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.837 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.837 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.837 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.837 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.837 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.838 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.838 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.838 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.838 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.838 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.838 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.839 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.839 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.839 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.839 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.839 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.839 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.839 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.840 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.840 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.840 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.840 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.840 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.840 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.841 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.841 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.841 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.841 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.841 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.841 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.842 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.842 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.842 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.842 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.842 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.842 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.843 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.843 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.843 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.843 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.843 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.843 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.843 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.844 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.844 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.844 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.844 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.844 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.844 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.844 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.845 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.845 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.845 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.845 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.845 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.845 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.845 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.846 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.846 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.846 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.846 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.846 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.846 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.846 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.847 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.847 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.847 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.847 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.847 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.848 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.848 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.848 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.848 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.848 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.848 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.848 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.849 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.849 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.849 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.849 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.849 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.849 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.850 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.850 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.850 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.850 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.850 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.850 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.850 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.851 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.851 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.851 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.851 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.851 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.851 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.851 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.852 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.852 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.852 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.852 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.852 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.852 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.852 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.853 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.853 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.853 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.853 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.853 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.853 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.853 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.855 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.855 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.855 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.855 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.855 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.855 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.855 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.856 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.856 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.856 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.856 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.856 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.856 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.856 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.857 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.857 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.857 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.857 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.857 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.858 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.858 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.858 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.858 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.858 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.858 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.859 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.859 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.859 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.859 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.859 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.859 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.860 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.860 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.860 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.860 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.860 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.861 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.861 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.861 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.861 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.861 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.861 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.862 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.862 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.862 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.862 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.863 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.863 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.863 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.863 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.863 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.863 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.864 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.864 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.864 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.864 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.864 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.865 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.865 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.865 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.865 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.867 221408 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.883 221408 INFO nova.virt.node [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Determined node identity 9903a6f8-fb0a-4d8e-b632-398eaedd969e from /var/lib/nova/compute_id
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.884 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.885 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.885 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.885 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.897 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f120a650b50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.899 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f120a650b50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.900 221408 INFO nova.virt.libvirt.driver [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Connection event '1' reason 'None'
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.907 221408 INFO nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Libvirt host capabilities <capabilities>
Jan 22 13:55:26 compute-1 nova_compute[221400]: 
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <host>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <uuid>2198fae5-1aa3-4940-83f6-677ed40734bb</uuid>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <cpu>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <arch>x86_64</arch>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model>EPYC-Rome-v4</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <vendor>AMD</vendor>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <microcode version='16777317'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <signature family='23' model='49' stepping='0'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='x2apic'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='tsc-deadline'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='osxsave'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='hypervisor'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='tsc_adjust'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='spec-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='stibp'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='arch-capabilities'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='ssbd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='cmp_legacy'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='topoext'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='virt-ssbd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='lbrv'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='tsc-scale'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='vmcb-clean'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='pause-filter'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='pfthreshold'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='svme-addr-chk'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='rdctl-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='skip-l1dfl-vmentry'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='mds-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature name='pschange-mc-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <pages unit='KiB' size='4'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <pages unit='KiB' size='2048'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <pages unit='KiB' size='1048576'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </cpu>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <power_management>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <suspend_mem/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </power_management>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <iommu support='no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <migration_features>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <live/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <uri_transports>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <uri_transport>tcp</uri_transport>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <uri_transport>rdma</uri_transport>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </uri_transports>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </migration_features>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <topology>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <cells num='1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <cell id='0'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:           <memory unit='KiB'>7864312</memory>
Jan 22 13:55:26 compute-1 nova_compute[221400]:           <pages unit='KiB' size='4'>1966078</pages>
Jan 22 13:55:26 compute-1 nova_compute[221400]:           <pages unit='KiB' size='2048'>0</pages>
Jan 22 13:55:26 compute-1 nova_compute[221400]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 22 13:55:26 compute-1 nova_compute[221400]:           <distances>
Jan 22 13:55:26 compute-1 nova_compute[221400]:             <sibling id='0' value='10'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:           </distances>
Jan 22 13:55:26 compute-1 nova_compute[221400]:           <cpus num='8'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:           </cpus>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         </cell>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </cells>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </topology>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <cache>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </cache>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <secmodel>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model>selinux</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <doi>0</doi>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </secmodel>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <secmodel>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model>dac</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <doi>0</doi>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </secmodel>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   </host>
Jan 22 13:55:26 compute-1 nova_compute[221400]: 
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <guest>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <os_type>hvm</os_type>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <arch name='i686'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <wordsize>32</wordsize>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <domain type='qemu'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <domain type='kvm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </arch>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <features>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <pae/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <nonpae/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <acpi default='on' toggle='yes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <apic default='on' toggle='no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <cpuselection/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <deviceboot/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <disksnapshot default='on' toggle='no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <externalSnapshot/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </features>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   </guest>
Jan 22 13:55:26 compute-1 nova_compute[221400]: 
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <guest>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <os_type>hvm</os_type>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <arch name='x86_64'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <wordsize>64</wordsize>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <domain type='qemu'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <domain type='kvm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </arch>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <features>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <acpi default='on' toggle='yes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <apic default='on' toggle='no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <cpuselection/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <deviceboot/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <disksnapshot default='on' toggle='no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <externalSnapshot/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </features>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   </guest>
Jan 22 13:55:26 compute-1 nova_compute[221400]: 
Jan 22 13:55:26 compute-1 nova_compute[221400]: </capabilities>
Jan 22 13:55:26 compute-1 nova_compute[221400]: 
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.913 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.917 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 22 13:55:26 compute-1 nova_compute[221400]: <domainCapabilities>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <domain>kvm</domain>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <arch>i686</arch>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <vcpu max='240'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <iothreads supported='yes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <os supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <enum name='firmware'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <loader supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>rom</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>pflash</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='readonly'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>yes</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>no</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='secure'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>no</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </loader>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   </os>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <cpu>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>on</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>off</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='maximumMigratable'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>on</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>off</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <vendor>AMD</vendor>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='succor'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <mode name='custom' supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bhi-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ddpd-u'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sha512'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sm3'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sm4'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bhi-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ddpd-u'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sha512'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sm3'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sm4'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cooperlake'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Denverton'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Denverton-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Denverton-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Denverton-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='perfmon-v2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='perfmon-v2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='prefetchi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbpb'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='perfmon-v2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='prefetchi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbpb'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-v4'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-v5'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx10'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx10-128'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx10-256'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx10-512'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx10'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx10-128'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx10-256'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx10-512'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Haswell'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Haswell-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Haswell-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Haswell-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Haswell-v4'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='IvyBridge'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='KnightsMill'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512er'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512pf'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512er'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512pf'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Opteron_G4'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Opteron_G5'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='tbm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='tbm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='SierraForest'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Snowridge'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='athlon'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='athlon-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='core2duo'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='core2duo-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='coreduo'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='coreduo-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='n270'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='n270-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='phenom'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='phenom-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   </cpu>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <memoryBacking supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <enum name='sourceType'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <value>file</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <value>anonymous</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <value>memfd</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   </memoryBacking>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <devices>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <disk supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='diskDevice'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>disk</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>cdrom</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>floppy</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>lun</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='bus'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>ide</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>fdc</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>scsi</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>usb</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>sata</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>virtio-transitional</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>virtio-non-transitional</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </disk>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <graphics supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>vnc</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>egl-headless</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>dbus</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </graphics>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <video supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='modelType'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>vga</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>cirrus</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>none</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>bochs</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>ramfb</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </video>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <hostdev supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='mode'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>subsystem</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='startupPolicy'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>default</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>mandatory</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>requisite</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>optional</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='subsysType'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>usb</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>pci</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>scsi</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='capsType'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='pciBackend'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </hostdev>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <rng supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>virtio-transitional</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>virtio-non-transitional</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='backendModel'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>random</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>egd</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>builtin</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </rng>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <filesystem supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='driverType'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>path</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>handle</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>virtiofs</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </filesystem>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <tpm supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>tpm-tis</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>tpm-crb</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='backendModel'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>emulator</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>external</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='backendVersion'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>2.0</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </tpm>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <redirdev supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='bus'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>usb</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </redirdev>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <channel supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>pty</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>unix</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </channel>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <crypto supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='model'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>qemu</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='backendModel'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>builtin</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </crypto>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <interface supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='backendType'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>default</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>passt</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </interface>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <panic supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>isa</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>hyperv</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </panic>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <console supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>null</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>vc</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>pty</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>dev</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>file</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>pipe</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>stdio</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>udp</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>tcp</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>unix</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>qemu-vdagent</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>dbus</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </console>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   </devices>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <features>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <gic supported='no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <genid supported='yes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <backup supported='yes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <async-teardown supported='yes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <s390-pv supported='no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <ps2 supported='yes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <tdx supported='no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <sev supported='no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <sgx supported='no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <hyperv supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='features'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>relaxed</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>vapic</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>spinlocks</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>vpindex</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>runtime</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>synic</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>stimer</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>reset</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>vendor_id</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>frequencies</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>reenlightenment</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>tlbflush</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>ipi</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>avic</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>emsr_bitmap</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>xmm_input</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <defaults>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </defaults>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </hyperv>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <launchSecurity supported='no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   </features>
Jan 22 13:55:26 compute-1 nova_compute[221400]: </domainCapabilities>
Jan 22 13:55:26 compute-1 nova_compute[221400]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 13:55:26 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.924 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 22 13:55:26 compute-1 nova_compute[221400]: <domainCapabilities>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <domain>kvm</domain>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <arch>i686</arch>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <vcpu max='4096'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <iothreads supported='yes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <os supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <enum name='firmware'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <loader supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>rom</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>pflash</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='readonly'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>yes</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>no</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='secure'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>no</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </loader>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   </os>
Jan 22 13:55:26 compute-1 nova_compute[221400]:   <cpu>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>on</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>off</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <enum name='maximumMigratable'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>on</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <value>off</value>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <vendor>AMD</vendor>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='succor'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:26 compute-1 nova_compute[221400]:     <mode name='custom' supported='yes'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bhi-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ddpd-u'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sha512'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sm3'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sm4'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bhi-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ddpd-u'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sha512'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sm3'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='sm4'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cooperlake'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Denverton'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Denverton-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Denverton-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Denverton-v3'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:26 compute-1 nova_compute[221400]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:26 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='perfmon-v2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='perfmon-v2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbpb'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='perfmon-v2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbpb'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-v5'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-128'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-256'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-512'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-128'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-256'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-512'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='IvyBridge'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='KnightsMill'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512er'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512pf'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512er'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512pf'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Opteron_G4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Opteron_G5'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tbm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tbm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SierraForest'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Snowridge'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='athlon'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='athlon-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='core2duo'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='core2duo-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='coreduo'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='coreduo-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='n270'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='n270-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='phenom'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='phenom-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   </cpu>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <memoryBacking supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <enum name='sourceType'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <value>file</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <value>anonymous</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <value>memfd</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   </memoryBacking>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <devices>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <disk supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='diskDevice'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>disk</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>cdrom</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>floppy</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>lun</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='bus'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>fdc</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>scsi</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>usb</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>sata</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio-transitional</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio-non-transitional</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </disk>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <graphics supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vnc</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>egl-headless</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>dbus</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </graphics>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <video supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='modelType'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vga</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>cirrus</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>none</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>bochs</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>ramfb</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </video>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <hostdev supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='mode'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>subsystem</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='startupPolicy'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>default</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>mandatory</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>requisite</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>optional</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='subsysType'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>usb</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>pci</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>scsi</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='capsType'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='pciBackend'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </hostdev>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <rng supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio-transitional</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio-non-transitional</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='backendModel'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>random</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>egd</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>builtin</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </rng>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <filesystem supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='driverType'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>path</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>handle</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtiofs</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </filesystem>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <tpm supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>tpm-tis</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>tpm-crb</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='backendModel'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>emulator</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>external</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='backendVersion'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>2.0</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </tpm>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <redirdev supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='bus'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>usb</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </redirdev>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <channel supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>pty</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>unix</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </channel>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <crypto supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='model'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>qemu</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='backendModel'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>builtin</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </crypto>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <interface supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='backendType'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>default</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>passt</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </interface>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <panic supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>isa</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>hyperv</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </panic>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <console supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>null</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vc</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>pty</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>dev</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>file</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>pipe</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>stdio</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>udp</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>tcp</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>unix</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>qemu-vdagent</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>dbus</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </console>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   </devices>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <features>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <gic supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <genid supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <backup supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <async-teardown supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <s390-pv supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <ps2 supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <tdx supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <sev supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <sgx supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <hyperv supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='features'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>relaxed</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vapic</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>spinlocks</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vpindex</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>runtime</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>synic</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>stimer</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>reset</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vendor_id</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>frequencies</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>reenlightenment</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>tlbflush</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>ipi</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>avic</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>emsr_bitmap</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>xmm_input</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <defaults>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </defaults>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </hyperv>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <launchSecurity supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   </features>
Jan 22 13:55:27 compute-1 nova_compute[221400]: </domainCapabilities>
Jan 22 13:55:27 compute-1 nova_compute[221400]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.985 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.988 221408 DEBUG nova.virt.libvirt.volume.mount [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:26.992 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 22 13:55:27 compute-1 nova_compute[221400]: <domainCapabilities>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <domain>kvm</domain>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <arch>x86_64</arch>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <vcpu max='240'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <iothreads supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <os supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <enum name='firmware'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <loader supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>rom</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>pflash</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='readonly'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>yes</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>no</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='secure'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>no</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </loader>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   </os>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <cpu>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>on</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>off</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='maximumMigratable'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>on</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>off</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <vendor>AMD</vendor>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='succor'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <mode name='custom' supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bhi-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ddpd-u'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sha512'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sm3'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sm4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bhi-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ddpd-u'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sha512'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sm3'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sm4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cooperlake'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Denverton'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Denverton-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Denverton-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Denverton-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='perfmon-v2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='perfmon-v2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbpb'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='perfmon-v2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbpb'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-v5'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-128'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-256'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-512'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-128'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-256'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-512'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='IvyBridge'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='KnightsMill'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512er'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512pf'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512er'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512pf'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Opteron_G4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Opteron_G5'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tbm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tbm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SierraForest'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Snowridge'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='athlon'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='athlon-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='core2duo'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='core2duo-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='coreduo'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='coreduo-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='n270'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='n270-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='phenom'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='phenom-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   </cpu>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <memoryBacking supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <enum name='sourceType'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <value>file</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <value>anonymous</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <value>memfd</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   </memoryBacking>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <devices>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <disk supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='diskDevice'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>disk</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>cdrom</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>floppy</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>lun</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='bus'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>ide</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>fdc</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>scsi</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>usb</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>sata</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio-transitional</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio-non-transitional</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </disk>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <graphics supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vnc</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>egl-headless</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>dbus</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </graphics>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <video supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='modelType'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vga</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>cirrus</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>none</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>bochs</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>ramfb</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </video>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <hostdev supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='mode'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>subsystem</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='startupPolicy'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>default</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>mandatory</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>requisite</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>optional</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='subsysType'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>usb</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>pci</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>scsi</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='capsType'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='pciBackend'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </hostdev>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <rng supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio-transitional</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio-non-transitional</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='backendModel'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>random</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>egd</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>builtin</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </rng>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <filesystem supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='driverType'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>path</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>handle</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtiofs</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </filesystem>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <tpm supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>tpm-tis</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>tpm-crb</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='backendModel'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>emulator</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>external</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='backendVersion'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>2.0</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </tpm>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <redirdev supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='bus'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>usb</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </redirdev>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <channel supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>pty</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>unix</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </channel>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <crypto supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='model'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>qemu</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='backendModel'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>builtin</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </crypto>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <interface supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='backendType'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>default</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>passt</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </interface>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <panic supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>isa</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>hyperv</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </panic>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <console supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>null</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vc</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>pty</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>dev</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>file</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>pipe</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>stdio</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>udp</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>tcp</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>unix</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>qemu-vdagent</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>dbus</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </console>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   </devices>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <features>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <gic supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <genid supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <backup supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <async-teardown supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <s390-pv supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <ps2 supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <tdx supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <sev supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <sgx supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <hyperv supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='features'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>relaxed</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vapic</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>spinlocks</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vpindex</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>runtime</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>synic</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>stimer</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>reset</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vendor_id</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>frequencies</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>reenlightenment</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>tlbflush</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>ipi</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>avic</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>emsr_bitmap</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>xmm_input</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <defaults>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </defaults>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </hyperv>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <launchSecurity supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   </features>
Jan 22 13:55:27 compute-1 nova_compute[221400]: </domainCapabilities>
Jan 22 13:55:27 compute-1 nova_compute[221400]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.074 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 22 13:55:27 compute-1 nova_compute[221400]: <domainCapabilities>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <domain>kvm</domain>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <arch>x86_64</arch>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <vcpu max='4096'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <iothreads supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <os supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <enum name='firmware'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <value>efi</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <loader supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>rom</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>pflash</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='readonly'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>yes</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>no</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='secure'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>yes</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>no</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </loader>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   </os>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <cpu>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>on</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>off</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='maximumMigratable'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>on</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>off</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <vendor>AMD</vendor>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='succor'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <mode name='custom' supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bhi-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ddpd-u'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sha512'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sm3'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sm4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bhi-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ddpd-u'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sha512'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sm3'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sm4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cooperlake'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Denverton'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Denverton-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Denverton-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Denverton-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='perfmon-v2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='perfmon-v2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbpb'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amd-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='auto-ibrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='perfmon-v2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbpb'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='stibp-always-on'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='EPYC-v5'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-128'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-256'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-512'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-128'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-256'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx10-512'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='prefetchiti'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Haswell-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='IvyBridge'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='KnightsMill'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512er'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512pf'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512er'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512pf'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Opteron_G4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Opteron_G5'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tbm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fma4'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tbm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xop'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='amx-tile'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-bf16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-fp16'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bitalg'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrc'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fzrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='la57'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='taa-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SierraForest'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 ceph-mon[81715]: pgmap v811: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2972063964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ifma'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cmpccxadd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fbsdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='fsrs'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ibrs-all'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='intel-psfd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='lam'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mcdt-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pbrsb-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='psdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='serialize'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vaes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='hle'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='rtm'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512bw'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512cd'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512dq'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512f'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='avx512vl'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='invpcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pcid'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='pku'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Snowridge'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='mpx'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='core-capability'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='split-lock-detect'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='cldemote'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='erms'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='gfni'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdir64b'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='movdiri'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='xsaves'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='athlon'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='athlon-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='core2duo'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='core2duo-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='coreduo'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='coreduo-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='n270'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='n270-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='ss'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='phenom'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <blockers model='phenom-v1'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnow'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <feature name='3dnowext'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </blockers>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </mode>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   </cpu>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <memoryBacking supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <enum name='sourceType'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <value>file</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <value>anonymous</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <value>memfd</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   </memoryBacking>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <devices>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <disk supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='diskDevice'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>disk</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>cdrom</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>floppy</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>lun</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='bus'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>fdc</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>scsi</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>usb</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>sata</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio-transitional</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio-non-transitional</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </disk>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <graphics supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vnc</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>egl-headless</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>dbus</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </graphics>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <video supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='modelType'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vga</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>cirrus</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>none</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>bochs</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>ramfb</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </video>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <hostdev supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='mode'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>subsystem</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='startupPolicy'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>default</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>mandatory</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>requisite</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>optional</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='subsysType'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>usb</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>pci</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>scsi</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='capsType'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='pciBackend'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </hostdev>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <rng supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio-transitional</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtio-non-transitional</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='backendModel'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>random</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>egd</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>builtin</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </rng>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <filesystem supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='driverType'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>path</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>handle</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>virtiofs</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </filesystem>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <tpm supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>tpm-tis</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>tpm-crb</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='backendModel'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>emulator</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>external</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='backendVersion'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>2.0</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </tpm>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <redirdev supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='bus'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>usb</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </redirdev>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <channel supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>pty</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>unix</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </channel>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <crypto supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='model'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>qemu</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='backendModel'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>builtin</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </crypto>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <interface supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='backendType'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>default</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>passt</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </interface>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <panic supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='model'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>isa</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>hyperv</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </panic>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <console supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='type'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>null</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vc</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>pty</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>dev</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>file</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>pipe</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>stdio</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>udp</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>tcp</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>unix</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>qemu-vdagent</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>dbus</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </console>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   </devices>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <features>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <gic supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <genid supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <backup supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <async-teardown supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <s390-pv supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <ps2 supported='yes'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <tdx supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <sev supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <sgx supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <hyperv supported='yes'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <enum name='features'>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>relaxed</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vapic</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>spinlocks</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vpindex</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>runtime</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>synic</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>stimer</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>reset</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>vendor_id</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>frequencies</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>reenlightenment</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>tlbflush</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>ipi</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>avic</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>emsr_bitmap</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <value>xmm_input</value>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </enum>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       <defaults>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:27 compute-1 nova_compute[221400]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:27 compute-1 nova_compute[221400]:       </defaults>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     </hyperv>
Jan 22 13:55:27 compute-1 nova_compute[221400]:     <launchSecurity supported='no'/>
Jan 22 13:55:27 compute-1 nova_compute[221400]:   </features>
Jan 22 13:55:27 compute-1 nova_compute[221400]: </domainCapabilities>
Jan 22 13:55:27 compute-1 nova_compute[221400]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.170 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.170 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.170 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.177 221408 INFO nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Secure Boot support detected
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.180 221408 INFO nova.virt.libvirt.driver [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.180 221408 INFO nova.virt.libvirt.driver [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.191 221408 DEBUG nova.virt.libvirt.driver [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] cpu compare xml: <cpu match="exact">
Jan 22 13:55:27 compute-1 nova_compute[221400]:   <model>Nehalem</model>
Jan 22 13:55:27 compute-1 nova_compute[221400]: </cpu>
Jan 22 13:55:27 compute-1 nova_compute[221400]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.193 221408 DEBUG nova.virt.libvirt.driver [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.233 221408 INFO nova.virt.node [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Determined node identity 9903a6f8-fb0a-4d8e-b632-398eaedd969e from /var/lib/nova/compute_id
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.251 221408 WARNING nova.compute.manager [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Compute nodes ['9903a6f8-fb0a-4d8e-b632-398eaedd969e'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.300 221408 INFO nova.compute.manager [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.342 221408 WARNING nova.compute.manager [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.342 221408 DEBUG oslo_concurrency.lockutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.342 221408 DEBUG oslo_concurrency.lockutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.342 221408 DEBUG oslo_concurrency.lockutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.343 221408 DEBUG nova.compute.resource_tracker [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.343 221408 DEBUG oslo_concurrency.processutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:55:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:55:27 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4194248427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.820 221408 DEBUG oslo_concurrency.processutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.978 221408 WARNING nova.virt.libvirt.driver [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.979 221408 DEBUG nova.compute.resource_tracker [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5284MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.979 221408 DEBUG oslo_concurrency.lockutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:55:27 compute-1 nova_compute[221400]: 2026-01-22 13:55:27.979 221408 DEBUG oslo_concurrency.lockutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:55:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:28.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:28 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/788234680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:28 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/4194248427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:28 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:28.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:28 compute-1 nova_compute[221400]: 2026-01-22 13:55:28.926 221408 WARNING nova.compute.resource_tracker [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] No compute node record for compute-1.ctlplane.example.com:9903a6f8-fb0a-4d8e-b632-398eaedd969e: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 9903a6f8-fb0a-4d8e-b632-398eaedd969e could not be found.
Jan 22 13:55:28 compute-1 nova_compute[221400]: 2026-01-22 13:55:28.951 221408 INFO nova.compute.resource_tracker [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: 9903a6f8-fb0a-4d8e-b632-398eaedd969e
Jan 22 13:55:29 compute-1 nova_compute[221400]: 2026-01-22 13:55:29.005 221408 DEBUG nova.compute.resource_tracker [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 13:55:29 compute-1 nova_compute[221400]: 2026-01-22 13:55:29.006 221408 DEBUG nova.compute.resource_tracker [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 13:55:29 compute-1 nova_compute[221400]: 2026-01-22 13:55:29.367 221408 INFO nova.scheduler.client.report [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] [req-8da87f40-a021-43f4-bf70-abf636307ede] Created resource provider record via placement API for resource provider with UUID 9903a6f8-fb0a-4d8e-b632-398eaedd969e and name compute-1.ctlplane.example.com.
Jan 22 13:55:29 compute-1 nova_compute[221400]: 2026-01-22 13:55:29.631 221408 DEBUG oslo_concurrency.processutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:55:29 compute-1 ceph-mon[81715]: pgmap v812: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:29 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:29 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2684260221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:29 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1006289607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:30 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:55:30 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1504024635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:30 compute-1 nova_compute[221400]: 2026-01-22 13:55:30.226 221408 DEBUG oslo_concurrency.processutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:55:30 compute-1 nova_compute[221400]: 2026-01-22 13:55:30.231 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 22 13:55:30 compute-1 nova_compute[221400]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 22 13:55:30 compute-1 nova_compute[221400]: 2026-01-22 13:55:30.231 221408 INFO nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] kernel doesn't support AMD SEV
Jan 22 13:55:30 compute-1 nova_compute[221400]: 2026-01-22 13:55:30.232 221408 DEBUG nova.compute.provider_tree [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Updating inventory in ProviderTree for provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 13:55:30 compute-1 nova_compute[221400]: 2026-01-22 13:55:30.232 221408 DEBUG nova.virt.libvirt.driver [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 22 13:55:30 compute-1 nova_compute[221400]: 2026-01-22 13:55:30.235 221408 DEBUG nova.virt.libvirt.driver [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Libvirt baseline CPU <cpu>
Jan 22 13:55:30 compute-1 nova_compute[221400]:   <arch>x86_64</arch>
Jan 22 13:55:30 compute-1 nova_compute[221400]:   <model>Nehalem</model>
Jan 22 13:55:30 compute-1 nova_compute[221400]:   <vendor>AMD</vendor>
Jan 22 13:55:30 compute-1 nova_compute[221400]:   <topology sockets="8" cores="1" threads="1"/>
Jan 22 13:55:30 compute-1 nova_compute[221400]: </cpu>
Jan 22 13:55:30 compute-1 nova_compute[221400]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Jan 22 13:55:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:30.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:30.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:31 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:31 compute-1 ceph-mon[81715]: pgmap v813: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1504024635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 13:55:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3652 writes, 21K keys, 3652 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s
                                           Cumulative WAL: 3652 writes, 3652 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1644 writes, 8820 keys, 1644 commit groups, 1.0 writes per commit group, ingest: 15.75 MB, 0.03 MB/s
                                           Interval WAL: 1644 writes, 1644 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     39.5      0.60              0.07        11    0.054       0      0       0.0       0.0
                                             L6      1/0    7.98 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    109.9     92.5      0.90              0.22        10    0.090     53K   5362       0.0       0.0
                                            Sum      1/0    7.98 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5     66.0     71.3      1.50              0.29        21    0.071     53K   5362       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.5     59.6     59.6      0.98              0.16        12    0.081     35K   3554       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    109.9     92.5      0.90              0.22        10    0.090     53K   5362       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     39.6      0.60              0.07        10    0.060       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.023, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.09 MB/s write, 0.10 GB read, 0.08 MB/s read, 1.5 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 7.01 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000102 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(355,6.60 MB,2.1698%) FilterBlock(21,158.98 KB,0.0510718%) IndexBlock(21,261.39 KB,0.0839685%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 13:55:32 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:32 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:32.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:32.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:32 compute-1 nova_compute[221400]: 2026-01-22 13:55:32.859 221408 DEBUG nova.scheduler.client.report [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Updated inventory for provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 22 13:55:32 compute-1 nova_compute[221400]: 2026-01-22 13:55:32.860 221408 DEBUG nova.compute.provider_tree [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Updating resource provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 22 13:55:32 compute-1 nova_compute[221400]: 2026-01-22 13:55:32.860 221408 DEBUG nova.compute.provider_tree [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Updating inventory in ProviderTree for provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 13:55:32 compute-1 nova_compute[221400]: 2026-01-22 13:55:32.947 221408 DEBUG nova.compute.provider_tree [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Updating resource provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 22 13:55:32 compute-1 nova_compute[221400]: 2026-01-22 13:55:32.968 221408 DEBUG nova.compute.resource_tracker [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 13:55:32 compute-1 nova_compute[221400]: 2026-01-22 13:55:32.969 221408 DEBUG oslo_concurrency.lockutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:55:32 compute-1 nova_compute[221400]: 2026-01-22 13:55:32.969 221408 DEBUG nova.service [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 22 13:55:33 compute-1 nova_compute[221400]: 2026-01-22 13:55:33.029 221408 DEBUG nova.service [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 22 13:55:33 compute-1 nova_compute[221400]: 2026-01-22 13:55:33.030 221408 DEBUG nova.servicegroup.drivers.db [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 22 13:55:33 compute-1 sudo[221747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:55:33 compute-1 sudo[221747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:33 compute-1 sudo[221747]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:33 compute-1 sudo[221772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:55:33 compute-1 sudo[221772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:33 compute-1 sudo[221772]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:33 compute-1 sudo[221797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:55:33 compute-1 sudo[221797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:33 compute-1 sudo[221797]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:33 compute-1 ceph-mon[81715]: pgmap v814: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:33 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1124 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:33 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:33 compute-1 sudo[221822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 13:55:33 compute-1 sudo[221822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:33 compute-1 sudo[221822]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:34 compute-1 sudo[221867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:55:34 compute-1 sudo[221867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:34 compute-1 sudo[221867]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:34 compute-1 sudo[221892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:55:34 compute-1 sudo[221892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:34 compute-1 sudo[221892]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:34 compute-1 sudo[221917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:55:34 compute-1 sudo[221917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:34 compute-1 sudo[221917]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:34 compute-1 sudo[221942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:55:34 compute-1 sudo[221942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:34.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:34.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:34 compute-1 sudo[221942]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:35 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:55:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:55:35 compute-1 ceph-mon[81715]: pgmap v815: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:55:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:55:36 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:55:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:55:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:55:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:55:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:36.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:36.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:37 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:37 compute-1 ceph-mon[81715]: pgmap v816: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:37 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:38 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:38.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:38.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:39 compute-1 podman[221999]: 2026-01-22 13:55:39.150914643 +0000 UTC m=+0.141602585 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 13:55:39 compute-1 ceph-mon[81715]: pgmap v817: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:39 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1129 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:39 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:40.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:40 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:40 compute-1 ceph-mon[81715]: pgmap v818: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:40.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:41 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:42 compute-1 sudo[222023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:55:42 compute-1 sudo[222023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:42 compute-1 sudo[222023]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:42 compute-1 sudo[222048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:55:42 compute-1 sudo[222048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:42 compute-1 sudo[222048]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:42.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:42.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:55:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:55:42 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:42 compute-1 ceph-mon[81715]: pgmap v819: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:43 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:43 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1134 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:44 compute-1 nova_compute[221400]: 2026-01-22 13:55:44.032 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:55:44 compute-1 nova_compute[221400]: 2026-01-22 13:55:44.057 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:55:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:44.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:44.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:45 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:45 compute-1 ceph-mon[81715]: pgmap v820: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:46 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:46 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:46.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:46.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:55:47.428 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:55:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:55:47.429 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:55:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:55:47.429 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:55:47 compute-1 ceph-mon[81715]: pgmap v821: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:47 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:48.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:48.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:49 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:50 compute-1 ceph-mon[81715]: pgmap v822: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:50 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1139 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:50 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:50.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:50.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:51 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:51 compute-1 ceph-mon[81715]: pgmap v823: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:51 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:52 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:52.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:52.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:53 compute-1 ceph-mon[81715]: pgmap v824: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:53 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:54 compute-1 podman[222073]: 2026-01-22 13:55:54.072978264 +0000 UTC m=+0.057396147 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 13:55:54 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1144 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:54 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:54.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:54.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:55 compute-1 ceph-mon[81715]: pgmap v825: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:55 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:56 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:56.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:56.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:57 compute-1 ceph-mon[81715]: pgmap v826: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:57 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:58.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:58 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:55:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:58.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:59 compute-1 ceph-mon[81715]: pgmap v827: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:59 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1149 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:59 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:00.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:00.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:00 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:00 compute-1 ceph-mon[81715]: pgmap v828: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:02 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:02.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:56:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:02.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:56:03 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:03 compute-1 ceph-mon[81715]: pgmap v829: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:04.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:04.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:06 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:06 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1154 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:06 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:06 compute-1 ceph-mon[81715]: pgmap v830: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:06.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:56:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:06.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:56:07 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:07 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:07 compute-1 ceph-mon[81715]: pgmap v831: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:07 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:08.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:08 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2141067726' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:56:08 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2141067726' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 13:56:08 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:08.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:09 compute-1 ceph-mon[81715]: pgmap v832: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:09 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1159 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:09 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3332621159' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:56:09 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3332621159' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 13:56:09 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 13:56:09 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2769503677' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:56:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 13:56:09 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2769503677' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 13:56:10 compute-1 podman[222091]: 2026-01-22 13:56:10.107847656 +0000 UTC m=+0.094539211 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 13:56:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:10.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:10 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2769503677' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:56:10 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2769503677' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 13:56:10 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:10 compute-1 ceph-mon[81715]: pgmap v833: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:10.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:12 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:12.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:12.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:13 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:13 compute-1 ceph-mon[81715]: pgmap v834: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:14.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:14 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:14 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1164 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:14 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:14.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:15 compute-1 ceph-mon[81715]: pgmap v835: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:15 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:16.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:56:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:16.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:56:16 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:16 compute-1 ceph-mon[81715]: pgmap v836: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:18 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:56:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:18.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:56:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:18.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:19 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:19 compute-1 ceph-mon[81715]: pgmap v837: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:19 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:19 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:20.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:20 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:56:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:20.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:56:21 compute-1 ceph-mon[81715]: pgmap v838: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:21 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:22.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:22 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:22.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:23 compute-1 ceph-mon[81715]: pgmap v839: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:23 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:24.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:24 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1174 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:24 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:24.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:25 compute-1 podman[222118]: 2026-01-22 13:56:25.095915759 +0000 UTC m=+0.082036160 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 13:56:25 compute-1 ceph-mon[81715]: pgmap v840: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:25 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/4187439219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:25 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:25 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/409233514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:25 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2838972096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:25 compute-1 nova_compute[221400]: 2026-01-22 13:56:25.952 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:25 compute-1 nova_compute[221400]: 2026-01-22 13:56:25.952 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:25 compute-1 nova_compute[221400]: 2026-01-22 13:56:25.952 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 13:56:25 compute-1 nova_compute[221400]: 2026-01-22 13:56:25.953 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 13:56:25 compute-1 nova_compute[221400]: 2026-01-22 13:56:25.984 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 13:56:25 compute-1 nova_compute[221400]: 2026-01-22 13:56:25.985 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:25 compute-1 nova_compute[221400]: 2026-01-22 13:56:25.985 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:25 compute-1 nova_compute[221400]: 2026-01-22 13:56:25.985 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:25 compute-1 nova_compute[221400]: 2026-01-22 13:56:25.986 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:25 compute-1 nova_compute[221400]: 2026-01-22 13:56:25.986 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:25 compute-1 nova_compute[221400]: 2026-01-22 13:56:25.986 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:25 compute-1 nova_compute[221400]: 2026-01-22 13:56:25.986 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 13:56:25 compute-1 nova_compute[221400]: 2026-01-22 13:56:25.986 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:26 compute-1 nova_compute[221400]: 2026-01-22 13:56:26.014 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:56:26 compute-1 nova_compute[221400]: 2026-01-22 13:56:26.015 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:56:26 compute-1 nova_compute[221400]: 2026-01-22 13:56:26.015 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:56:26 compute-1 nova_compute[221400]: 2026-01-22 13:56:26.015 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 13:56:26 compute-1 nova_compute[221400]: 2026-01-22 13:56:26.016 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:56:26 compute-1 rsyslogd[1007]: imjournal from <np0005592158:ceph-mon>: begin to drop messages due to rate-limiting
Jan 22 13:56:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:56:26 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/577808434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:26 compute-1 nova_compute[221400]: 2026-01-22 13:56:26.484 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:56:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:26.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:26 compute-1 nova_compute[221400]: 2026-01-22 13:56:26.670 221408 WARNING nova.virt.libvirt.driver [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 13:56:26 compute-1 nova_compute[221400]: 2026-01-22 13:56:26.673 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5341MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 13:56:26 compute-1 nova_compute[221400]: 2026-01-22 13:56:26.674 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:56:26 compute-1 nova_compute[221400]: 2026-01-22 13:56:26.674 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:56:26 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3407441321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:26 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:26 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/577808434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:26 compute-1 nova_compute[221400]: 2026-01-22 13:56:26.800 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 13:56:26 compute-1 nova_compute[221400]: 2026-01-22 13:56:26.801 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 13:56:26 compute-1 nova_compute[221400]: 2026-01-22 13:56:26.842 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:56:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:56:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:26.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:56:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:56:27 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/709703454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:27 compute-1 nova_compute[221400]: 2026-01-22 13:56:27.303 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:56:27 compute-1 nova_compute[221400]: 2026-01-22 13:56:27.310 221408 DEBUG nova.compute.provider_tree [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Inventory has not changed in ProviderTree for provider: 9903a6f8-fb0a-4d8e-b632-398eaedd969e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 13:56:27 compute-1 nova_compute[221400]: 2026-01-22 13:56:27.390 221408 DEBUG nova.scheduler.client.report [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Inventory has not changed for provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 13:56:27 compute-1 nova_compute[221400]: 2026-01-22 13:56:27.392 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 13:56:27 compute-1 nova_compute[221400]: 2026-01-22 13:56:27.392 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:56:27 compute-1 ceph-mon[81715]: pgmap v841: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:27 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:27 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/709703454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:28.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:28 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:56:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:28.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:56:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:29 compute-1 ceph-mon[81715]: pgmap v842: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:29 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1179 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:29 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:30.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:30.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:30 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:30 compute-1 ceph-mon[81715]: pgmap v843: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:31 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:56:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:32.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:56:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:32.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:32 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:32 compute-1 ceph-mon[81715]: pgmap v844: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:34 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:34 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:34.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:34.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:35 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:35 compute-1 ceph-mon[81715]: pgmap v845: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:36 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:36.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:36.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:37 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:37 compute-1 ceph-mon[81715]: pgmap v846: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:38 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:38.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:38.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:39 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:39 compute-1 ceph-mon[81715]: pgmap v847: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:39 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:39 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:40 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:40.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:40.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:41 compute-1 podman[222181]: 2026-01-22 13:56:41.105116428 +0000 UTC m=+0.094485310 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 13:56:41 compute-1 ceph-mon[81715]: pgmap v848: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:41 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:42 compute-1 sudo[222207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:42 compute-1 sudo[222207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:42 compute-1 sudo[222207]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:42 compute-1 sudo[222232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:56:42 compute-1 sudo[222232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:42 compute-1 sudo[222232]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:42 compute-1 sudo[222257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:42 compute-1 sudo[222257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:42 compute-1 sudo[222257]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:42 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:56:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:42.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:56:42 compute-1 sudo[222282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:56:42 compute-1 sudo[222282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:42.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:43 compute-1 sudo[222282]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:43 compute-1 ceph-mon[81715]: pgmap v849: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:56:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:56:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 13:56:43 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 13:56:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:56:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:44.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:56:44 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:44 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:44.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:45 compute-1 ceph-mon[81715]: pgmap v850: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:45 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:56:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:56:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:46.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:46 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:46.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:56:47.429 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:56:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:56:47.429 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:56:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:56:47.429 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:56:47 compute-1 ceph-mon[81715]: pgmap v851: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:56:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:56:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:56:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:56:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:56:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:56:47 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:48.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:48.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:49 compute-1 ceph-mon[81715]: pgmap v852: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:50 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:50 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:56:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:50.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:56:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:50.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:51 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:51 compute-1 ceph-mon[81715]: pgmap v853: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:52 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:52.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:52.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:53 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:53 compute-1 ceph-mon[81715]: pgmap v854: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:53 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:54 compute-1 sudo[222337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:54 compute-1 sudo[222337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:54 compute-1 sudo[222337]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:54 compute-1 sudo[222362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:56:54 compute-1 sudo[222362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:54 compute-1 sudo[222362]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:54.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:54 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:56:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:56:54 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:56:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:54.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:56:55 compute-1 ceph-mon[81715]: pgmap v855: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:55 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:56 compute-1 podman[222387]: 2026-01-22 13:56:56.060531186 +0000 UTC m=+0.052342899 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 13:56:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:56.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:56.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:57 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:57 compute-1 ceph-mon[81715]: pgmap v856: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:57 compute-1 radosgw[82426]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 22 13:56:57 compute-1 radosgw[82426]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 22 13:56:57 compute-1 radosgw[82426]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 22 13:56:57 compute-1 radosgw[82426]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 22 13:56:58 compute-1 radosgw[82426]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 22 13:56:58 compute-1 radosgw[82426]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 22 13:56:58 compute-1 radosgw[82426]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 22 13:56:58 compute-1 radosgw[82426]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 22 13:56:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:58.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:56:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:56:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:58.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:56:58 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:58 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:59 compute-1 ceph-mon[81715]: pgmap v857: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:59 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:56:59 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:00.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:00.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:00 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:00 compute-1 ceph-mon[81715]: pgmap v858: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Jan 22 13:57:02 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:02.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:57:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:02.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:57:03 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:03 compute-1 ceph-mon[81715]: pgmap v859: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 104 op/s
Jan 22 13:57:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:04 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:04 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:04.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:04.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:05 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:05 compute-1 ceph-mon[81715]: pgmap v860: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 83 KiB/s rd, 0 B/s wr, 138 op/s
Jan 22 13:57:05 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:06.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:06.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:07 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:08.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:08.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:08 compute-1 ceph-mon[81715]: pgmap v861: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 83 KiB/s rd, 0 B/s wr, 138 op/s
Jan 22 13:57:08 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:08 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:10 compute-1 ceph-mon[81715]: pgmap v862: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 83 KiB/s rd, 0 B/s wr, 138 op/s
Jan 22 13:57:10 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:10 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:10.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:10.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:11 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:11 compute-1 ceph-mon[81715]: pgmap v863: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 83 KiB/s rd, 0 B/s wr, 138 op/s
Jan 22 13:57:12 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:12 compute-1 podman[222407]: 2026-01-22 13:57:12.583686789 +0000 UTC m=+0.100870323 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 13:57:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:12.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:12.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:13 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:13 compute-1 ceph-mon[81715]: pgmap v864: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 122 op/s
Jan 22 13:57:13 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:14 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:14 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:14.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:14.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:15 compute-1 ceph-mon[81715]: pgmap v865: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Jan 22 13:57:15 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:16 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:16.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:16.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:17 compute-1 ceph-mon[81715]: pgmap v866: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:18.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:18 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:18 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2190481051' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:57:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2190481051' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 13:57:18 compute-1 ceph-mon[81715]: pgmap v867: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:18.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:19 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:19 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:20.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:20 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:20 compute-1 ceph-mon[81715]: pgmap v868: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:20.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:21 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:22.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:22 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:22 compute-1 ceph-mon[81715]: pgmap v869: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:22.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:23 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:23 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:24.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:24.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:24 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:24 compute-1 ceph-mon[81715]: pgmap v870: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:26 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:26.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:26.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:27 compute-1 podman[222433]: 2026-01-22 13:57:27.062685671 +0000 UTC m=+0.052211706 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 13:57:27 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:27 compute-1 ceph-mon[81715]: pgmap v871: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:27 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3781396254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:27 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2928817451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.384 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.385 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.523 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.524 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.524 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.524 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.524 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.525 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.525 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.949 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.949 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.949 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.966 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.967 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.995 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.996 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.996 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.997 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 13:57:27 compute-1 nova_compute[221400]: 2026-01-22 13:57:27.998 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:57:28 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:28 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2038610146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:28 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/4110742391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:57:28 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/742374999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:28 compute-1 nova_compute[221400]: 2026-01-22 13:57:28.455 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:57:28 compute-1 nova_compute[221400]: 2026-01-22 13:57:28.614 221408 WARNING nova.virt.libvirt.driver [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 13:57:28 compute-1 nova_compute[221400]: 2026-01-22 13:57:28.615 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5344MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 13:57:28 compute-1 nova_compute[221400]: 2026-01-22 13:57:28.615 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:57:28 compute-1 nova_compute[221400]: 2026-01-22 13:57:28.616 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:57:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:28.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:28 compute-1 nova_compute[221400]: 2026-01-22 13:57:28.948 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 13:57:28 compute-1 nova_compute[221400]: 2026-01-22 13:57:28.949 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 13:57:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:28.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:28 compute-1 nova_compute[221400]: 2026-01-22 13:57:28.984 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:57:29 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:29 compute-1 ceph-mon[81715]: pgmap v872: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:29 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/742374999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:29 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:57:29 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2685310246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:29 compute-1 nova_compute[221400]: 2026-01-22 13:57:29.468 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:57:29 compute-1 nova_compute[221400]: 2026-01-22 13:57:29.474 221408 DEBUG nova.compute.provider_tree [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Inventory has not changed in ProviderTree for provider: 9903a6f8-fb0a-4d8e-b632-398eaedd969e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 13:57:29 compute-1 nova_compute[221400]: 2026-01-22 13:57:29.500 221408 DEBUG nova.scheduler.client.report [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Inventory has not changed for provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 13:57:29 compute-1 nova_compute[221400]: 2026-01-22 13:57:29.502 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 13:57:29 compute-1 nova_compute[221400]: 2026-01-22 13:57:29.503 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:57:30 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:30 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/2685310246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:30.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:30.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:31 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:31 compute-1 ceph-mon[81715]: pgmap v873: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:31 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.637178) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090251637237, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 2490, "num_deletes": 251, "total_data_size": 5078444, "memory_usage": 5159968, "flush_reason": "Manual Compaction"}
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090251789122, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 3317147, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21069, "largest_seqno": 23554, "table_properties": {"data_size": 3307579, "index_size": 5614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 24430, "raw_average_key_size": 21, "raw_value_size": 3286471, "raw_average_value_size": 2905, "num_data_blocks": 244, "num_entries": 1131, "num_filter_entries": 1131, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090073, "oldest_key_time": 1769090073, "file_creation_time": 1769090251, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 151999 microseconds, and 8769 cpu microseconds.
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.789185) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 3317147 bytes OK
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.789210) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.791589) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.791612) EVENT_LOG_v1 {"time_micros": 1769090251791605, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.791633) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 5067003, prev total WAL file size 5067003, number of live WAL files 2.
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.793071) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(3239KB)], [42(8176KB)]
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090251793136, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 11689645, "oldest_snapshot_seqno": -1}
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5937 keys, 9843233 bytes, temperature: kUnknown
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090251887942, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 9843233, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9804207, "index_size": 23108, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14853, "raw_key_size": 153682, "raw_average_key_size": 25, "raw_value_size": 9696645, "raw_average_value_size": 1633, "num_data_blocks": 927, "num_entries": 5937, "num_filter_entries": 5937, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769090251, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.888241) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 9843233 bytes
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.892683) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.2 rd, 103.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.0 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 6456, records dropped: 519 output_compression: NoCompression
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.892731) EVENT_LOG_v1 {"time_micros": 1769090251892713, "job": 24, "event": "compaction_finished", "compaction_time_micros": 94906, "compaction_time_cpu_micros": 25945, "output_level": 6, "num_output_files": 1, "total_output_size": 9843233, "num_input_records": 6456, "num_output_records": 5937, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090251893865, "job": 24, "event": "table_file_deletion", "file_number": 44}
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090251895435, "job": 24, "event": "table_file_deletion", "file_number": 42}
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.792971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.895476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.895481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.895483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.895485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:57:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:57:31.895487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:57:32 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:32.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:32.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:34 compute-1 ceph-mon[81715]: pgmap v874: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:34 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:34.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:34.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:34 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:34 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:34 compute-1 ceph-mon[81715]: pgmap v875: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:36 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:36.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:36.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:37 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:37 compute-1 ceph-mon[81715]: pgmap v876: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:38 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:38 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:57:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:38.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:57:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:38.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:39 compute-1 ceph-mon[81715]: pgmap v877: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:39 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:39 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1249 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:40 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:40.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:40.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:41 compute-1 ceph-mon[81715]: pgmap v878: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:41 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:42.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:42 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:42.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:43 compute-1 podman[222496]: 2026-01-22 13:57:43.102780982 +0000 UTC m=+0.085656600 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 22 13:57:43 compute-1 ceph-mon[81715]: pgmap v879: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:43 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:44.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:44 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:44 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1254 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:44 compute-1 ceph-mon[81715]: pgmap v880: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:44.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:45 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:46.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:46 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:46 compute-1 ceph-mon[81715]: pgmap v881: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:46.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:57:47.430 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:57:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:57:47.430 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:57:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:57:47.430 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:57:47 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:48.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:48 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:48 compute-1 ceph-mon[81715]: pgmap v882: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:48 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1259 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:48.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:49 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:50 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:57:50.222 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 13:57:50 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:57:50.224 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 13:57:50 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:57:50.225 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 13:57:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:50.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:50.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:51 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:51 compute-1 ceph-mon[81715]: pgmap v883: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:52 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:52.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:53.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:53 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:53 compute-1 ceph-mon[81715]: pgmap v884: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:54 compute-1 sudo[222522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:57:54 compute-1 sudo[222522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:57:54 compute-1 sudo[222522]: pam_unix(sudo:session): session closed for user root
Jan 22 13:57:54 compute-1 sudo[222547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:57:54 compute-1 sudo[222547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:57:54 compute-1 sudo[222547]: pam_unix(sudo:session): session closed for user root
Jan 22 13:57:54 compute-1 sudo[222572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:57:54 compute-1 sudo[222572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:57:54 compute-1 sudo[222572]: pam_unix(sudo:session): session closed for user root
Jan 22 13:57:54 compute-1 sudo[222597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:57:54 compute-1 sudo[222597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:57:54 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:54 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:54 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1264 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:54.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:55 compute-1 sudo[222597]: pam_unix(sudo:session): session closed for user root
Jan 22 13:57:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:55.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:55 compute-1 ceph-mon[81715]: pgmap v885: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:55 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:57:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:57:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:57:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:57:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:57:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:57:56 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:56.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:57.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:57 compute-1 ceph-mon[81715]: pgmap v886: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:57 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:58 compute-1 podman[222653]: 2026-01-22 13:57:58.070484918 +0000 UTC m=+0.056113333 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 13:57:58 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:58.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:57:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:59.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:59 compute-1 ceph-mon[81715]: pgmap v887: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:59 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:59 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1269 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:00.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:00 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:01.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:01 compute-1 ceph-mon[81715]: pgmap v888: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:01 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:02 compute-1 sudo[222672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:58:02 compute-1 sudo[222672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:58:02 compute-1 sudo[222672]: pam_unix(sudo:session): session closed for user root
Jan 22 13:58:02 compute-1 sudo[222697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:58:02 compute-1 sudo[222697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:58:02 compute-1 sudo[222697]: pam_unix(sudo:session): session closed for user root
Jan 22 13:58:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:02.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:02 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:58:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:58:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:03.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:03 compute-1 ceph-mon[81715]: pgmap v889: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:03 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:58:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:04.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:58:04 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:04 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1274 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:04 compute-1 ceph-mon[81715]: pgmap v890: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:05.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:05 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:06.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:06 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:06 compute-1 ceph-mon[81715]: pgmap v891: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:58:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:07.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:58:07 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:08.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:08 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:08 compute-1 ceph-mon[81715]: pgmap v892: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:08 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1279 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:58:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:09.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:58:09 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:10.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:11 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:11 compute-1 ceph-mon[81715]: pgmap v893: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:12 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:12.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:13.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:13 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:13 compute-1 ceph-mon[81715]: pgmap v894: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:14 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:14 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1284 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:14 compute-1 podman[222722]: 2026-01-22 13:58:14.100713172 +0000 UTC m=+0.086695048 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 13:58:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:14.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:15.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:15 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:15 compute-1 ceph-mon[81715]: pgmap v895: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:16 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:16.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:58:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:17.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:58:17 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:17 compute-1 ceph-mon[81715]: pgmap v896: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 13:58:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1890730645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:58:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 13:58:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1890730645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 13:58:18 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1890730645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:58:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1890730645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 13:58:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:58:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:18.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:58:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:19.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:19 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:19 compute-1 ceph-mon[81715]: pgmap v897: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:19 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:19 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1289 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:58:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:20.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:58:20 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:21.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:21 compute-1 ceph-mon[81715]: pgmap v898: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:21 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:22.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:22 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:22 compute-1 ceph-mon[81715]: pgmap v899: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:23.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:23 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:24.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:24 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:24 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1294 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:24 compute-1 ceph-mon[81715]: pgmap v900: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:25.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:25 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:26.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:26 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:26 compute-1 ceph-mon[81715]: pgmap v901: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:26 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2345469985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:58:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:27.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:58:27 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:27 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3226939497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:27 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1562183962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:27 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/711554436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:28 compute-1 nova_compute[221400]: 2026-01-22 13:58:28.486 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:28 compute-1 nova_compute[221400]: 2026-01-22 13:58:28.487 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:28 compute-1 nova_compute[221400]: 2026-01-22 13:58:28.487 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:28 compute-1 nova_compute[221400]: 2026-01-22 13:58:28.487 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:28 compute-1 nova_compute[221400]: 2026-01-22 13:58:28.487 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:28 compute-1 nova_compute[221400]: 2026-01-22 13:58:28.487 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:28 compute-1 nova_compute[221400]: 2026-01-22 13:58:28.487 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:28 compute-1 nova_compute[221400]: 2026-01-22 13:58:28.487 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 13:58:28 compute-1 nova_compute[221400]: 2026-01-22 13:58:28.487 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:28 compute-1 nova_compute[221400]: 2026-01-22 13:58:28.532 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:58:28 compute-1 nova_compute[221400]: 2026-01-22 13:58:28.533 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:58:28 compute-1 nova_compute[221400]: 2026-01-22 13:58:28.533 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:58:28 compute-1 nova_compute[221400]: 2026-01-22 13:58:28.533 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 13:58:28 compute-1 nova_compute[221400]: 2026-01-22 13:58:28.534 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:58:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:28.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:58:28 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2133378307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:28 compute-1 nova_compute[221400]: 2026-01-22 13:58:28.964 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:58:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:29.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:29 compute-1 podman[222771]: 2026-01-22 13:58:29.061944651 +0000 UTC m=+0.057429880 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 13:58:29 compute-1 nova_compute[221400]: 2026-01-22 13:58:29.132 221408 WARNING nova.virt.libvirt.driver [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 13:58:29 compute-1 nova_compute[221400]: 2026-01-22 13:58:29.133 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5347MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 13:58:29 compute-1 nova_compute[221400]: 2026-01-22 13:58:29.134 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:58:29 compute-1 nova_compute[221400]: 2026-01-22 13:58:29.134 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:58:29 compute-1 nova_compute[221400]: 2026-01-22 13:58:29.245 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 13:58:29 compute-1 nova_compute[221400]: 2026-01-22 13:58:29.245 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 13:58:29 compute-1 nova_compute[221400]: 2026-01-22 13:58:29.268 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:58:29 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:29 compute-1 ceph-mon[81715]: pgmap v902: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:29 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1299 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:29 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/2133378307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:58:29 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3382157991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:29 compute-1 nova_compute[221400]: 2026-01-22 13:58:29.711 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:58:29 compute-1 nova_compute[221400]: 2026-01-22 13:58:29.717 221408 DEBUG nova.compute.provider_tree [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Inventory has not changed in ProviderTree for provider: 9903a6f8-fb0a-4d8e-b632-398eaedd969e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 13:58:29 compute-1 nova_compute[221400]: 2026-01-22 13:58:29.738 221408 DEBUG nova.scheduler.client.report [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Inventory has not changed for provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 13:58:29 compute-1 nova_compute[221400]: 2026-01-22 13:58:29.740 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 13:58:29 compute-1 nova_compute[221400]: 2026-01-22 13:58:29.740 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:58:30 compute-1 nova_compute[221400]: 2026-01-22 13:58:30.203 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:30 compute-1 nova_compute[221400]: 2026-01-22 13:58:30.203 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 13:58:30 compute-1 nova_compute[221400]: 2026-01-22 13:58:30.203 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 13:58:30 compute-1 nova_compute[221400]: 2026-01-22 13:58:30.230 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 13:58:30 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:30 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:30 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3382157991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:30.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:31.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:31 compute-1 ceph-mon[81715]: pgmap v903: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:31 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:32 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:32.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:33.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:33 compute-1 ceph-mon[81715]: pgmap v904: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:33 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:34 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:34 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1304 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:34.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:35.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:35 compute-1 ceph-mon[81715]: pgmap v905: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:35 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:36 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:36.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:37.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:37 compute-1 ceph-mon[81715]: pgmap v906: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:37 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:58:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:38.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:58:38 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:39.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:39 compute-1 ceph-mon[81715]: pgmap v907: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:39 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:39 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1309 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:40.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:40 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:40 compute-1 ceph-mon[81715]: pgmap v908: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:41.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:42 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:42.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:43.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:43 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:43 compute-1 ceph-mon[81715]: pgmap v909: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:43 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:44 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:44 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1314 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:44.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:45.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:45 compute-1 podman[222813]: 2026-01-22 13:58:45.11473456 +0000 UTC m=+0.108784741 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251202)
Jan 22 13:58:45 compute-1 ceph-mon[81715]: pgmap v910: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:45 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:46.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:47.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:58:47.431 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:58:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:58:47.432 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:58:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:58:47.432 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:58:47 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:48 compute-1 ceph-mon[81715]: pgmap v911: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:48 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:48.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:49.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:49 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:49 compute-1 ceph-mon[81715]: pgmap v912: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:49 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1319 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:50 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:50 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:50.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:51.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:51 compute-1 ceph-mon[81715]: pgmap v913: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:51 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:52 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:52.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:53.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:53 compute-1 ceph-mon[81715]: pgmap v914: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:53 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:54 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:54 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1324 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:54.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:55.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:55 compute-1 ceph-mon[81715]: pgmap v915: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:55 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:56 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:56.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:57.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:57 compute-1 ceph-mon[81715]: pgmap v916: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:57 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:58 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:58.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:58.893164) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090338893262, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 1381, "num_deletes": 256, "total_data_size": 2436844, "memory_usage": 2475840, "flush_reason": "Manual Compaction"}
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090338907980, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 1600863, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23559, "largest_seqno": 24935, "table_properties": {"data_size": 1595448, "index_size": 2619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13862, "raw_average_key_size": 20, "raw_value_size": 1583357, "raw_average_value_size": 2291, "num_data_blocks": 116, "num_entries": 691, "num_filter_entries": 691, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090252, "oldest_key_time": 1769090252, "file_creation_time": 1769090338, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 14836 microseconds, and 5203 cpu microseconds.
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:58.908037) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 1600863 bytes OK
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:58.908057) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:58.909787) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:58.909800) EVENT_LOG_v1 {"time_micros": 1769090338909796, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:58.909819) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2430125, prev total WAL file size 2430125, number of live WAL files 2.
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:58.910600) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(1563KB)], [45(9612KB)]
Jan 22 13:58:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090338910718, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 11444096, "oldest_snapshot_seqno": -1}
Jan 22 13:58:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 6103 keys, 11294353 bytes, temperature: kUnknown
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090339006499, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 11294353, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11252851, "index_size": 25136, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 158877, "raw_average_key_size": 26, "raw_value_size": 11140920, "raw_average_value_size": 1825, "num_data_blocks": 1009, "num_entries": 6103, "num_filter_entries": 6103, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769090338, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:59.007992) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 11294353 bytes
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:59.009497) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.3 rd, 117.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(14.2) write-amplify(7.1) OK, records in: 6628, records dropped: 525 output_compression: NoCompression
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:59.009531) EVENT_LOG_v1 {"time_micros": 1769090339009517, "job": 26, "event": "compaction_finished", "compaction_time_micros": 95911, "compaction_time_cpu_micros": 39374, "output_level": 6, "num_output_files": 1, "total_output_size": 11294353, "num_input_records": 6628, "num_output_records": 6103, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090339010641, "job": 26, "event": "table_file_deletion", "file_number": 47}
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090339013698, "job": 26, "event": "table_file_deletion", "file_number": 45}
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:58.910533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:59.013796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:59.013804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:59.013806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:59.013808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:58:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:58:59.013811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:58:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:58:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:59.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:59 compute-1 ceph-mon[81715]: pgmap v917: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:59 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:59 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1329 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:00 compute-1 podman[222840]: 2026-01-22 13:59:00.07818671 +0000 UTC m=+0.062081296 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 13:59:00 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:00.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:59:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:01.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:59:01 compute-1 ceph-mon[81715]: pgmap v918: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:01 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:02 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:59:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:02.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:59:02 compute-1 sudo[222859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:59:03 compute-1 sudo[222859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:03 compute-1 sudo[222859]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:03 compute-1 sudo[222884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:59:03 compute-1 sudo[222884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:03 compute-1 sudo[222884]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:03.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:03 compute-1 sudo[222909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:59:03 compute-1 sudo[222909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:03 compute-1 sudo[222909]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:03 compute-1 sudo[222934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:59:03 compute-1 sudo[222934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:03 compute-1 sudo[222934]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:03 compute-1 ceph-mon[81715]: pgmap v919: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:03 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:04.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:05 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:05 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1334 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:59:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:59:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:59:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:59:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:59:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:59:05 compute-1 ceph-mon[81715]: pgmap v920: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:05.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:06 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:06 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:06.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:07.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:07 compute-1 ceph-mon[81715]: pgmap v921: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:07 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:08 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:08.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:59:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:09.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:59:09 compute-1 ceph-mon[81715]: pgmap v922: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:09 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:09 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1339 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:10 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:10.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:10 compute-1 sudo[222990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:59:10 compute-1 sudo[222990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:10 compute-1 sudo[222990]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:10 compute-1 sudo[223015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:59:10 compute-1 sudo[223015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:11 compute-1 sudo[223015]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:11.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:11 compute-1 ceph-mon[81715]: pgmap v923: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:59:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:59:11 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:12 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:59:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:12.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:59:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:59:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:13.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:59:13 compute-1 ceph-mon[81715]: pgmap v924: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:13 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:14 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:14 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1344 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:14.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:15.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:15 compute-1 ceph-mon[81715]: pgmap v925: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:15 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:16 compute-1 podman[223040]: 2026-01-22 13:59:16.111927 +0000 UTC m=+0.101522422 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 13:59:16 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:59:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:16.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:59:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:17.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:17 compute-1 ceph-mon[81715]: pgmap v926: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:17 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 13:59:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3490422437' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:59:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 13:59:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3490422437' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 13:59:18 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3490422437' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:59:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3490422437' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 13:59:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:59:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:18.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:59:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:19.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:19 compute-1 ceph-mon[81715]: pgmap v927: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:19 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:19 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1349 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:20 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:20.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:21.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:21 compute-1 ceph-mon[81715]: pgmap v928: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:21 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:22.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:22 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:22 compute-1 ceph-mon[81715]: pgmap v929: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:23.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.906463) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090363906535, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 593, "num_deletes": 250, "total_data_size": 743795, "memory_usage": 755392, "flush_reason": "Manual Compaction"}
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090363911634, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 395730, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24940, "largest_seqno": 25528, "table_properties": {"data_size": 392899, "index_size": 739, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7937, "raw_average_key_size": 20, "raw_value_size": 386850, "raw_average_value_size": 999, "num_data_blocks": 31, "num_entries": 387, "num_filter_entries": 387, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090339, "oldest_key_time": 1769090339, "file_creation_time": 1769090363, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 5219 microseconds, and 2108 cpu microseconds.
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.911699) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 395730 bytes OK
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.911720) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.912786) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.912799) EVENT_LOG_v1 {"time_micros": 1769090363912795, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.912819) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 740354, prev total WAL file size 740354, number of live WAL files 2.
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.913523) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(386KB)], [48(10MB)]
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090363913559, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 11690083, "oldest_snapshot_seqno": -1}
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5983 keys, 7893370 bytes, temperature: kUnknown
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090363965275, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 7893370, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7857109, "index_size": 20215, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 157000, "raw_average_key_size": 26, "raw_value_size": 7751620, "raw_average_value_size": 1295, "num_data_blocks": 793, "num_entries": 5983, "num_filter_entries": 5983, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769090363, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.965783) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7893370 bytes
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.967979) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.0 rd, 151.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.8 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(49.5) write-amplify(19.9) OK, records in: 6490, records dropped: 507 output_compression: NoCompression
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.968015) EVENT_LOG_v1 {"time_micros": 1769090363968000, "job": 28, "event": "compaction_finished", "compaction_time_micros": 51948, "compaction_time_cpu_micros": 20237, "output_level": 6, "num_output_files": 1, "total_output_size": 7893370, "num_input_records": 6490, "num_output_records": 5983, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090363968226, "job": 28, "event": "table_file_deletion", "file_number": 50}
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090363970076, "job": 28, "event": "table_file_deletion", "file_number": 48}
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.913409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.970114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.970118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.970120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.970122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:59:23 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:59:23.970124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:59:23 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:23 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1354 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:24.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:24 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:24 compute-1 ceph-mon[81715]: pgmap v930: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:25.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - - [22/Jan/2026:13:59:25.851 +0000] "GET /swift/info HTTP/1.1" 200 509 - "python-urllib3/1.26.5" - latency=0.000000000s
Jan 22 13:59:26 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:59:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:26.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:59:26 compute-1 nova_compute[221400]: 2026-01-22 13:59:26.950 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:27 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:27 compute-1 ceph-mon[81715]: pgmap v931: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:27.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:27 compute-1 nova_compute[221400]: 2026-01-22 13:59:27.949 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:27 compute-1 nova_compute[221400]: 2026-01-22 13:59:27.949 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:27 compute-1 nova_compute[221400]: 2026-01-22 13:59:27.949 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:28 compute-1 nova_compute[221400]: 2026-01-22 13:59:28.001 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:59:28 compute-1 nova_compute[221400]: 2026-01-22 13:59:28.001 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:59:28 compute-1 nova_compute[221400]: 2026-01-22 13:59:28.002 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:59:28 compute-1 nova_compute[221400]: 2026-01-22 13:59:28.002 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 13:59:28 compute-1 nova_compute[221400]: 2026-01-22 13:59:28.002 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:59:28 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:59:28 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4186869810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:28 compute-1 nova_compute[221400]: 2026-01-22 13:59:28.434 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:59:28 compute-1 nova_compute[221400]: 2026-01-22 13:59:28.588 221408 WARNING nova.virt.libvirt.driver [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 13:59:28 compute-1 nova_compute[221400]: 2026-01-22 13:59:28.589 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5361MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 13:59:28 compute-1 nova_compute[221400]: 2026-01-22 13:59:28.589 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:59:28 compute-1 nova_compute[221400]: 2026-01-22 13:59:28.590 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:59:28 compute-1 nova_compute[221400]: 2026-01-22 13:59:28.684 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 13:59:28 compute-1 nova_compute[221400]: 2026-01-22 13:59:28.684 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 13:59:28 compute-1 nova_compute[221400]: 2026-01-22 13:59:28.699 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:59:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:28.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:29 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:29 compute-1 ceph-mon[81715]: pgmap v932: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:29 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/4186869810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:29 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1265044308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:29 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1359 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:29 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3122242723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:59:29 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4002500831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:29 compute-1 nova_compute[221400]: 2026-01-22 13:59:29.127 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:59:29 compute-1 nova_compute[221400]: 2026-01-22 13:59:29.132 221408 DEBUG nova.compute.provider_tree [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Inventory has not changed in ProviderTree for provider: 9903a6f8-fb0a-4d8e-b632-398eaedd969e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 13:59:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:29.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:29 compute-1 nova_compute[221400]: 2026-01-22 13:59:29.156 221408 DEBUG nova.scheduler.client.report [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Inventory has not changed for provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 13:59:29 compute-1 nova_compute[221400]: 2026-01-22 13:59:29.157 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 13:59:29 compute-1 nova_compute[221400]: 2026-01-22 13:59:29.158 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:59:30 compute-1 nova_compute[221400]: 2026-01-22 13:59:30.154 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:30 compute-1 nova_compute[221400]: 2026-01-22 13:59:30.154 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:30 compute-1 nova_compute[221400]: 2026-01-22 13:59:30.154 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 13:59:30 compute-1 nova_compute[221400]: 2026-01-22 13:59:30.155 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 13:59:30 compute-1 nova_compute[221400]: 2026-01-22 13:59:30.182 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 13:59:30 compute-1 nova_compute[221400]: 2026-01-22 13:59:30.183 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:30 compute-1 nova_compute[221400]: 2026-01-22 13:59:30.183 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:30 compute-1 nova_compute[221400]: 2026-01-22 13:59:30.183 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:30 compute-1 nova_compute[221400]: 2026-01-22 13:59:30.184 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 13:59:30 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:30 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/4002500831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:30 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/4290223506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:30 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2587536646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:30.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:30 compute-1 nova_compute[221400]: 2026-01-22 13:59:30.973 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:31 compute-1 podman[223110]: 2026-01-22 13:59:31.06144727 +0000 UTC m=+0.055026232 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 13:59:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:31.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Jan 22 13:59:31 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:31 compute-1 ceph-mon[81715]: pgmap v933: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:32 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:32 compute-1 ceph-mon[81715]: osdmap e140: 3 total, 3 up, 3 in
Jan 22 13:59:32 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Jan 22 13:59:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:32.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:33.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:33 compute-1 ceph-mon[81715]: pgmap v935: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:33 compute-1 ceph-mon[81715]: osdmap e141: 3 total, 3 up, 3 in
Jan 22 13:59:33 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Jan 22 13:59:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:34 compute-1 ceph-mon[81715]: osdmap e142: 3 total, 3 up, 3 in
Jan 22 13:59:34 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:34 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1364 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:34.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:35.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:35 compute-1 ceph-mon[81715]: pgmap v938: 305 pgs: 2 active+clean+laggy, 303 active+clean; 8.4 MiB data, 161 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 1.3 MiB/s wr, 0 op/s
Jan 22 13:59:35 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:36 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Jan 22 13:59:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:36.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:37.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:37 compute-1 ceph-mon[81715]: pgmap v939: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 174 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 32 op/s
Jan 22 13:59:37 compute-1 ceph-mon[81715]: osdmap e143: 3 total, 3 up, 3 in
Jan 22 13:59:37 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:38 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:38.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Jan 22 13:59:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:39.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:39 compute-1 ceph-mon[81715]: pgmap v941: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 174 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 32 op/s
Jan 22 13:59:39 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1369 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:39 compute-1 ceph-mon[81715]: osdmap e144: 3 total, 3 up, 3 in
Jan 22 13:59:39 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:40 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:40.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:41.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:41 compute-1 ceph-mon[81715]: pgmap v943: 305 pgs: 2 active+clean+laggy, 303 active+clean; 25 MiB data, 178 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 3.6 MiB/s wr, 47 op/s
Jan 22 13:59:41 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:42.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:43 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:43 compute-1 ceph-mon[81715]: pgmap v944: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 4.1 MiB/s wr, 47 op/s
Jan 22 13:59:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:59:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:43.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:59:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:44 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:44 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1374 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:59:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:44.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:59:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:45.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:45 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:45 compute-1 ceph-mon[81715]: pgmap v945: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s
Jan 22 13:59:46 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:46.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:47 compute-1 podman[223131]: 2026-01-22 13:59:47.143728426 +0000 UTC m=+0.117603851 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 13:59:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:47.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:47 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:47 compute-1 ceph-mon[81715]: pgmap v946: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.1 MiB/s wr, 18 op/s
Jan 22 13:59:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:59:47.433 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:59:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:59:47.434 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:59:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 13:59:47.434 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:59:48 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:48.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:49.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:49 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:49 compute-1 ceph-mon[81715]: pgmap v947: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 18 op/s
Jan 22 13:59:49 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1379 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:50 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:50.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:51.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:51 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:51 compute-1 ceph-mon[81715]: pgmap v948: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 1.4 MiB/s wr, 5 op/s
Jan 22 13:59:52 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:52 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:59:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:52.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:59:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:53.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:53 compute-1 ceph-mon[81715]: pgmap v949: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 1.3 MiB/s wr, 4 op/s
Jan 22 13:59:53 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:59:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:54.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:59:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:55.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:55 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1384 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:55 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:56 compute-1 ceph-mon[81715]: pgmap v950: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:56 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:56 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:56.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:57.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:57 compute-1 ceph-mon[81715]: pgmap v951: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:57 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:59:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:58.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:59:58 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:58 compute-1 ceph-mon[81715]: pgmap v952: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:58 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1389 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 13:59:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:59.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:00 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:00 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:00:00.594 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:00:00 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:00:00.595 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:00:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:00.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:01.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:01 compute-1 ceph-mon[81715]: Health detail: HEALTH_WARN 4 slow ops, oldest one blocked for 1389 sec, osd.2 has slow ops
Jan 22 14:00:01 compute-1 ceph-mon[81715]: [WRN] SLOW_OPS: 4 slow ops, oldest one blocked for 1389 sec, osd.2 has slow ops
Jan 22 14:00:01 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:01 compute-1 ceph-mon[81715]: pgmap v953: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:01 compute-1 anacron[8883]: Job `cron.monthly' started
Jan 22 14:00:01 compute-1 anacron[8883]: Job `cron.monthly' terminated
Jan 22 14:00:01 compute-1 anacron[8883]: Normal exit (3 jobs run)
Jan 22 14:00:01 compute-1 podman[223159]: 2026-01-22 14:00:01.471736572 +0000 UTC m=+0.088151409 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:00:02 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:02 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:02.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:03.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:03 compute-1 ceph-mon[81715]: pgmap v954: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:03 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:04.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:05 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1394 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:05 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:05 compute-1 ceph-mon[81715]: pgmap v955: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:05.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:06 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:06 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2336252334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:06.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:07.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:07 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:07 compute-1 ceph-mon[81715]: pgmap v956: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:07 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:08 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:08.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:09.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:09 compute-1 ceph-mon[81715]: pgmap v957: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:09 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:09 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:10 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:00:10.597 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:00:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:10.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:11 compute-1 sudo[223179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:00:11 compute-1 sudo[223179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:11 compute-1 sudo[223179]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:11 compute-1 sudo[223204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:00:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:11 compute-1 sudo[223204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:11 compute-1 sudo[223204]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:11.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:11 compute-1 sudo[223229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:00:11 compute-1 sudo[223229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:11 compute-1 sudo[223229]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:11 compute-1 sudo[223254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:00:11 compute-1 sudo[223254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:11 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:11 compute-1 sudo[223254]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:12.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:13.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:13 compute-1 ceph-mon[81715]: pgmap v958: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 14:00:13 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:00:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:00:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:00:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:00:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:00:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:00:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e145 e145: 3 total, 3 up, 3 in
Jan 22 14:00:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:14 compute-1 ceph-mon[81715]: pgmap v959: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 5 op/s
Jan 22 14:00:14 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:14 compute-1 ceph-mon[81715]: osdmap e145: 3 total, 3 up, 3 in
Jan 22 14:00:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:14.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:15.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:15 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e146 e146: 3 total, 3 up, 3 in
Jan 22 14:00:15 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:15 compute-1 ceph-mon[81715]: pgmap v961: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 825 KiB/s rd, 7 op/s
Jan 22 14:00:16 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:16 compute-1 ceph-mon[81715]: osdmap e146: 3 total, 3 up, 3 in
Jan 22 14:00:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:16.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:17.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:18 compute-1 podman[223312]: 2026-01-22 14:00:18.10050906 +0000 UTC m=+0.094276686 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 14:00:18 compute-1 ceph-mon[81715]: pgmap v963: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 255 B/s wr, 10 op/s
Jan 22 14:00:18 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 1409 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:18 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:18.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:19.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/866052997' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:00:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/866052997' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:00:19 compute-1 ceph-mon[81715]: pgmap v964: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 255 B/s wr, 10 op/s
Jan 22 14:00:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:20.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:21.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.626769) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090421626837, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1032, "num_deletes": 251, "total_data_size": 1751567, "memory_usage": 1775728, "flush_reason": "Manual Compaction"}
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090421647791, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 1150713, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25533, "largest_seqno": 26560, "table_properties": {"data_size": 1146145, "index_size": 2028, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11771, "raw_average_key_size": 20, "raw_value_size": 1136234, "raw_average_value_size": 2000, "num_data_blocks": 89, "num_entries": 568, "num_filter_entries": 568, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090363, "oldest_key_time": 1769090363, "file_creation_time": 1769090421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 21090 microseconds, and 6819 cpu microseconds.
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.647861) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 1150713 bytes OK
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.647888) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.657583) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.657636) EVENT_LOG_v1 {"time_micros": 1769090421657625, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.657688) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1746240, prev total WAL file size 1746240, number of live WAL files 2.
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.658781) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(1123KB)], [51(7708KB)]
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090421658933, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 9044083, "oldest_snapshot_seqno": -1}
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 6034 keys, 7301377 bytes, temperature: kUnknown
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090421706107, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 7301377, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7265236, "index_size": 19967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 159309, "raw_average_key_size": 26, "raw_value_size": 7159078, "raw_average_value_size": 1186, "num_data_blocks": 778, "num_entries": 6034, "num_filter_entries": 6034, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769090421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.706819) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7301377 bytes
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.708623) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.3 rd, 154.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 7.5 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(14.2) write-amplify(6.3) OK, records in: 6551, records dropped: 517 output_compression: NoCompression
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.708671) EVENT_LOG_v1 {"time_micros": 1769090421708643, "job": 30, "event": "compaction_finished", "compaction_time_micros": 47266, "compaction_time_cpu_micros": 22814, "output_level": 6, "num_output_files": 1, "total_output_size": 7301377, "num_input_records": 6551, "num_output_records": 6034, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090421708985, "job": 30, "event": "table_file_deletion", "file_number": 53}
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090421710468, "job": 30, "event": "table_file_deletion", "file_number": 51}
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.658553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.710496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.710501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.710503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.710504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:00:21 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:00:21.710506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:00:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:22 compute-1 ceph-mon[81715]: pgmap v965: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 511 B/s wr, 3 op/s
Jan 22 14:00:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:22.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:23.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:23 compute-1 ceph-mon[81715]: pgmap v966: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 618 B/s wr, 2 op/s
Jan 22 14:00:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e147 e147: 3 total, 3 up, 3 in
Jan 22 14:00:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:24 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 1414 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:24.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:25.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:25 compute-1 sudo[223339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:00:25 compute-1 sudo[223339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:25 compute-1 sudo[223339]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:25 compute-1 sudo[223364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:00:25 compute-1 sudo[223364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:25 compute-1 sudo[223364]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:25 compute-1 ceph-mon[81715]: pgmap v967: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 511 B/s wr, 1 op/s
Jan 22 14:00:25 compute-1 ceph-mon[81715]: osdmap e147: 3 total, 3 up, 3 in
Jan 22 14:00:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:00:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:00:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:25 compute-1 nova_compute[221400]: 2026-01-22 14:00:25.950 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:25 compute-1 nova_compute[221400]: 2026-01-22 14:00:25.950 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 22 14:00:25 compute-1 nova_compute[221400]: 2026-01-22 14:00:25.978 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 22 14:00:25 compute-1 nova_compute[221400]: 2026-01-22 14:00:25.980 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:25 compute-1 nova_compute[221400]: 2026-01-22 14:00:25.980 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 22 14:00:25 compute-1 nova_compute[221400]: 2026-01-22 14:00:25.994 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:26.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:27.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:28 compute-1 nova_compute[221400]: 2026-01-22 14:00:28.004 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:28 compute-1 nova_compute[221400]: 2026-01-22 14:00:28.004 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:28 compute-1 ceph-mon[81715]: pgmap v969: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 102 B/s rd, 307 B/s wr, 0 op/s
Jan 22 14:00:28 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:28 compute-1 nova_compute[221400]: 2026-01-22 14:00:28.949 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:28 compute-1 nova_compute[221400]: 2026-01-22 14:00:28.950 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:28 compute-1 nova_compute[221400]: 2026-01-22 14:00:28.950 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:28 compute-1 nova_compute[221400]: 2026-01-22 14:00:28.950 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:00:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:28.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:29.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:29 compute-1 ceph-mon[81715]: pgmap v970: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 102 B/s rd, 307 B/s wr, 0 op/s
Jan 22 14:00:29 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3842263417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:29 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 1418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:29 compute-1 nova_compute[221400]: 2026-01-22 14:00:29.949 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:29 compute-1 nova_compute[221400]: 2026-01-22 14:00:29.950 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:00:29 compute-1 nova_compute[221400]: 2026-01-22 14:00:29.950 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:00:29 compute-1 nova_compute[221400]: 2026-01-22 14:00:29.966 221408 DEBUG nova.compute.manager [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:00:29 compute-1 nova_compute[221400]: 2026-01-22 14:00:29.967 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:29 compute-1 nova_compute[221400]: 2026-01-22 14:00:29.967 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:29 compute-1 nova_compute[221400]: 2026-01-22 14:00:29.991 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:00:29 compute-1 nova_compute[221400]: 2026-01-22 14:00:29.991 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:00:29 compute-1 nova_compute[221400]: 2026-01-22 14:00:29.992 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:00:29 compute-1 nova_compute[221400]: 2026-01-22 14:00:29.992 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:00:29 compute-1 nova_compute[221400]: 2026-01-22 14:00:29.992 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:00:30 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:00:30 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1731822576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:30 compute-1 nova_compute[221400]: 2026-01-22 14:00:30.436 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:00:30 compute-1 nova_compute[221400]: 2026-01-22 14:00:30.586 221408 WARNING nova.virt.libvirt.driver [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:00:30 compute-1 nova_compute[221400]: 2026-01-22 14:00:30.587 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5333MB free_disk=20.98827362060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:00:30 compute-1 nova_compute[221400]: 2026-01-22 14:00:30.588 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:00:30 compute-1 nova_compute[221400]: 2026-01-22 14:00:30.588 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:00:30 compute-1 nova_compute[221400]: 2026-01-22 14:00:30.743 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:00:30 compute-1 nova_compute[221400]: 2026-01-22 14:00:30.744 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:00:30 compute-1 nova_compute[221400]: 2026-01-22 14:00:30.810 221408 DEBUG nova.scheduler.client.report [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Refreshing inventories for resource provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 22 14:00:30 compute-1 nova_compute[221400]: 2026-01-22 14:00:30.866 221408 DEBUG nova.scheduler.client.report [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Updating ProviderTree inventory for provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 22 14:00:30 compute-1 nova_compute[221400]: 2026-01-22 14:00:30.867 221408 DEBUG nova.compute.provider_tree [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Updating inventory in ProviderTree for provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 14:00:30 compute-1 nova_compute[221400]: 2026-01-22 14:00:30.885 221408 DEBUG nova.scheduler.client.report [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Refreshing aggregate associations for resource provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 22 14:00:30 compute-1 nova_compute[221400]: 2026-01-22 14:00:30.905 221408 DEBUG nova.scheduler.client.report [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Refreshing trait associations for resource provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 22 14:00:30 compute-1 nova_compute[221400]: 2026-01-22 14:00:30.923 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:00:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:30.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/726780379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1731822576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:31.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:00:31 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3469774285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:31 compute-1 nova_compute[221400]: 2026-01-22 14:00:31.376 221408 DEBUG oslo_concurrency.processutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:00:31 compute-1 nova_compute[221400]: 2026-01-22 14:00:31.382 221408 DEBUG nova.compute.provider_tree [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Inventory has not changed in ProviderTree for provider: 9903a6f8-fb0a-4d8e-b632-398eaedd969e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:00:31 compute-1 nova_compute[221400]: 2026-01-22 14:00:31.412 221408 DEBUG nova.scheduler.client.report [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Inventory has not changed for provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:00:31 compute-1 nova_compute[221400]: 2026-01-22 14:00:31.413 221408 DEBUG nova.compute.resource_tracker [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:00:31 compute-1 nova_compute[221400]: 2026-01-22 14:00:31.413 221408 DEBUG oslo_concurrency.lockutils [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:00:32 compute-1 podman[223433]: 2026-01-22 14:00:32.071774925 +0000 UTC m=+0.057927933 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 22 14:00:32 compute-1 ceph-mon[81715]: pgmap v971: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 102 B/s rd, 102 B/s wr, 0 op/s
Jan 22 14:00:32 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1032470193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:32 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3469774285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:32 compute-1 nova_compute[221400]: 2026-01-22 14:00:32.408 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:32.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:33.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:33 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3949338357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:33 compute-1 ceph-mon[81715]: pgmap v972: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:34 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:34 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:34 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 1423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:34.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:35.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:36 compute-1 ceph-mon[81715]: pgmap v973: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:37.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:37.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:38 compute-1 ceph-mon[81715]: pgmap v974: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:39.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:39 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:39 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:39 compute-1 ceph-mon[81715]: pgmap v975: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:00:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:39.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:00:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:40 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 1428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:41.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:41.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:41 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:41 compute-1 ceph-mon[81715]: pgmap v976: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:43.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:43.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e148 e148: 3 total, 3 up, 3 in
Jan 22 14:00:43 compute-1 ceph-mon[81715]: pgmap v977: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:43 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:45.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:45.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:45 compute-1 ceph-mon[81715]: osdmap e148: 3 total, 3 up, 3 in
Jan 22 14:00:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:45 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 1434 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Jan 22 14:00:46 compute-1 ceph-mon[81715]: pgmap v979: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:46 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:46 compute-1 ceph-mon[81715]: osdmap e149: 3 total, 3 up, 3 in
Jan 22 14:00:46 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:47.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:47.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:00:47.434 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:00:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:00:47.435 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:00:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:00:47.435 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:00:47 compute-1 ceph-mon[81715]: pgmap v981: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 127 B/s rd, 383 B/s wr, 0 op/s
Jan 22 14:00:47 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:48 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:49.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:49 compute-1 podman[223452]: 2026-01-22 14:00:49.134873222 +0000 UTC m=+0.121546251 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 14:00:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:49.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:50 compute-1 ceph-mon[81715]: pgmap v982: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 127 B/s rd, 383 B/s wr, 0 op/s
Jan 22 14:00:50 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:50 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 1438 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:51.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:51.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:51 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:51 compute-1 ceph-mon[81715]: pgmap v983: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 383 B/s rd, 639 B/s wr, 1 op/s
Jan 22 14:00:52 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:52 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:53.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:53.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:53 compute-1 ceph-mon[81715]: pgmap v984: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 365 B/s rd, 731 B/s wr, 1 op/s
Jan 22 14:00:53 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Jan 22 14:00:54 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:54 compute-1 ceph-mon[81715]: Health check update: 5 slow ops, oldest one blocked for 1444 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:54 compute-1 ceph-mon[81715]: osdmap e150: 3 total, 3 up, 3 in
Jan 22 14:00:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:55.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:55.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:55 compute-1 ceph-mon[81715]: pgmap v985: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 307 B/s rd, 614 B/s wr, 1 op/s
Jan 22 14:00:55 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:56 compute-1 sshd-session[223478]: Invalid user admin from 85.155.224.38 port 36220
Jan 22 14:00:56 compute-1 sshd-session[223478]: Connection closed by invalid user admin 85.155.224.38 port 36220 [preauth]
Jan 22 14:00:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:57.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:57 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:57 compute-1 ceph-mon[81715]: pgmap v987: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 204 B/s rd, 307 B/s wr, 0 op/s
Jan 22 14:00:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:57.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:57 compute-1 nova_compute[221400]: 2026-01-22 14:00:57.289 221408 DEBUG oslo_concurrency.lockutils [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] Acquiring lock "a400551a-d18e-4cc5-a66c-d338f22a5bab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:00:57 compute-1 nova_compute[221400]: 2026-01-22 14:00:57.290 221408 DEBUG oslo_concurrency.lockutils [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] Lock "a400551a-d18e-4cc5-a66c-d338f22a5bab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:00:57 compute-1 nova_compute[221400]: 2026-01-22 14:00:57.400 221408 DEBUG nova.compute.manager [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] [instance: a400551a-d18e-4cc5-a66c-d338f22a5bab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 14:00:57 compute-1 nova_compute[221400]: 2026-01-22 14:00:57.541 221408 DEBUG oslo_concurrency.lockutils [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:00:57 compute-1 nova_compute[221400]: 2026-01-22 14:00:57.542 221408 DEBUG oslo_concurrency.lockutils [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:00:57 compute-1 nova_compute[221400]: 2026-01-22 14:00:57.550 221408 DEBUG nova.virt.hardware [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 14:00:57 compute-1 nova_compute[221400]: 2026-01-22 14:00:57.551 221408 INFO nova.compute.claims [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] [instance: a400551a-d18e-4cc5-a66c-d338f22a5bab] Claim successful on node compute-1.ctlplane.example.com
Jan 22 14:00:57 compute-1 nova_compute[221400]: 2026-01-22 14:00:57.711 221408 DEBUG oslo_concurrency.processutils [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:00:58 compute-1 nova_compute[221400]: 2026-01-22 14:00:58.143 221408 DEBUG oslo_concurrency.processutils [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:00:58 compute-1 nova_compute[221400]: 2026-01-22 14:00:58.149 221408 DEBUG nova.compute.provider_tree [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] Inventory has not changed in ProviderTree for provider: 9903a6f8-fb0a-4d8e-b632-398eaedd969e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:00:58 compute-1 nova_compute[221400]: 2026-01-22 14:00:58.328 221408 DEBUG nova.scheduler.client.report [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] Inventory has not changed for provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:00:58 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:58 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/646427276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:58 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/473326240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:58 compute-1 nova_compute[221400]: 2026-01-22 14:00:58.517 221408 DEBUG oslo_concurrency.lockutils [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:00:58 compute-1 nova_compute[221400]: 2026-01-22 14:00:58.518 221408 DEBUG nova.compute.manager [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] [instance: a400551a-d18e-4cc5-a66c-d338f22a5bab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 14:00:58 compute-1 nova_compute[221400]: 2026-01-22 14:00:58.629 221408 DEBUG nova.compute.manager [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] [instance: a400551a-d18e-4cc5-a66c-d338f22a5bab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 14:00:58 compute-1 nova_compute[221400]: 2026-01-22 14:00:58.629 221408 DEBUG nova.network.neutron [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] [instance: a400551a-d18e-4cc5-a66c-d338f22a5bab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 14:00:58 compute-1 nova_compute[221400]: 2026-01-22 14:00:58.845 221408 INFO nova.virt.libvirt.driver [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] [instance: a400551a-d18e-4cc5-a66c-d338f22a5bab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 14:00:58 compute-1 nova_compute[221400]: 2026-01-22 14:00:58.877 221408 DEBUG nova.compute.manager [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] [instance: a400551a-d18e-4cc5-a66c-d338f22a5bab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 14:00:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:59 compute-1 nova_compute[221400]: 2026-01-22 14:00:59.028 221408 DEBUG nova.compute.manager [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] [instance: a400551a-d18e-4cc5-a66c-d338f22a5bab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 14:00:59 compute-1 nova_compute[221400]: 2026-01-22 14:00:59.030 221408 DEBUG nova.virt.libvirt.driver [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] [instance: a400551a-d18e-4cc5-a66c-d338f22a5bab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 14:00:59 compute-1 nova_compute[221400]: 2026-01-22 14:00:59.030 221408 INFO nova.virt.libvirt.driver [None req-df81c1cd-a0ac-4e96-9f80-9517e4631bd1 8282cfb5b1e345c0b703e3083290f091 5ac1591ef8e94c9f9a1e29bfdcf7abf4 - - default default] [instance: a400551a-d18e-4cc5-a66c-d338f22a5bab] Creating image(s)
Jan 22 14:00:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:59.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:00:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:59.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:59 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:59 compute-1 ceph-mon[81715]: pgmap v988: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 204 B/s rd, 307 B/s wr, 0 op/s
Jan 22 14:00:59 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:01:00 compute-1 ceph-mon[81715]: Health check update: 5 slow ops, oldest one blocked for 1449 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:00 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:01:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:01.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:01.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:01 compute-1 CROND[223519]: (root) CMD (run-parts /etc/cron.hourly)
Jan 22 14:01:01 compute-1 run-parts[223522]: (/etc/cron.hourly) starting 0anacron
Jan 22 14:01:01 compute-1 run-parts[223528]: (/etc/cron.hourly) finished 0anacron
Jan 22 14:01:01 compute-1 CROND[223518]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 22 14:01:01 compute-1 ceph-mon[81715]: pgmap v989: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 102 B/s wr, 0 op/s
Jan 22 14:01:01 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:01:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:03.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:03 compute-1 podman[223529]: 2026-01-22 14:01:03.092610009 +0000 UTC m=+0.078543146 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 14:01:03 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:01:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:03.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:03 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:01:03.404 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:01:03 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:01:03.405 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:01:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:04 compute-1 ceph-mon[81715]: pgmap v990: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.8 KiB/s rd, 102 B/s wr, 7 op/s
Jan 22 14:01:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:05.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:05.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:05 compute-1 ceph-mon[81715]: pgmap v991: 305 pgs: 2 active+clean+laggy, 303 active+clean; 51 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 825 KiB/s rd, 874 KiB/s wr, 8 op/s
Jan 22 14:01:05 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 1454 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:05 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:06 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:07.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:07.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:07 compute-1 ceph-mon[81715]: pgmap v992: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 23 op/s
Jan 22 14:01:07 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:08 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:09.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:09.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:10 compute-1 ceph-mon[81715]: pgmap v993: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 22 op/s
Jan 22 14:01:10 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:10 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 1459 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:10 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:01:10.408 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:01:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:11.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:11.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:11 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:11 compute-1 ceph-mon[81715]: pgmap v994: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 22 op/s
Jan 22 14:01:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:13 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:13.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:13.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:14 compute-1 ceph-mon[81715]: pgmap v995: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 22 op/s
Jan 22 14:01:14 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:15.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:15.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:15 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:15 compute-1 ceph-mon[81715]: pgmap v996: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 16 op/s
Jan 22 14:01:15 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 1464 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:16 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:17.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:17.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:17 compute-1 ceph-mon[81715]: pgmap v997: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.0 MiB/s wr, 15 op/s
Jan 22 14:01:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:17 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1460541179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:01:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:19.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2016890132' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:01:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2016890132' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:01:19 compute-1 ceph-mon[81715]: pgmap v998: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:19.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:20 compute-1 podman[223548]: 2026-01-22 14:01:20.155219655 +0000 UTC m=+0.144777935 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 14:01:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:20 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 1469 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:21.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:21 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:21 compute-1 ceph-mon[81715]: pgmap v999: 305 pgs: 2 active+clean+laggy, 303 active+clean; 91 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 114 KiB/s wr, 1 op/s
Jan 22 14:01:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:21.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:23.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:23 compute-1 ceph-mon[81715]: pgmap v1000: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 231 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:01:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:23.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:25.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:25 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:25 compute-1 ceph-mon[81715]: pgmap v1001: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:01:25 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 1474 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:25.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:25 compute-1 sudo[223575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:01:25 compute-1 sudo[223575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:25 compute-1 sudo[223575]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:25 compute-1 sudo[223600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:01:25 compute-1 sudo[223600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:25 compute-1 sudo[223600]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:25 compute-1 sudo[223625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:01:25 compute-1 sudo[223625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:25 compute-1 sudo[223625]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:25 compute-1 sudo[223650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:01:25 compute-1 sudo[223650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:26 compute-1 sudo[223650]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:27.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:27 compute-1 sshd-session[223705]: Invalid user orangepi from 85.155.224.38 port 45588
Jan 22 14:01:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:27.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:27 compute-1 ceph-mon[81715]: pgmap v1002: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:01:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:01:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:01:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:01:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:01:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:01:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:01:27 compute-1 sshd-session[223705]: Connection closed by invalid user orangepi 85.155.224.38 port 45588 [preauth]
Jan 22 14:01:28 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:28 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:29.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:29.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:29 compute-1 ceph-mon[81715]: pgmap v1003: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:01:29 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:29 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1207234773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:01:30 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 1479 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:30 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3234656352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:01:30 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:30 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2414432863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:01:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:31.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:31.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:31 compute-1 ceph-mon[81715]: pgmap v1004: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:01:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:32 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:33.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:33 compute-1 sudo[223707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:01:33 compute-1 sudo[223707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:33 compute-1 sudo[223707]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:33.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:33 compute-1 sudo[223738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:01:33 compute-1 podman[223731]: 2026-01-22 14:01:33.390632966 +0000 UTC m=+0.078248828 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 14:01:33 compute-1 sudo[223738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:33 compute-1 sudo[223738]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:33 compute-1 ceph-mon[81715]: pgmap v1005: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 7.9 KiB/s rd, 1.3 MiB/s wr, 14 op/s
Jan 22 14:01:33 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1550481817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:01:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:01:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:01:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:34 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2455143070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:01:34 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:34 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 1484 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:35.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:35.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:35 compute-1 ceph-mon[81715]: pgmap v1006: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:35 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:36 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:37.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:37.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:37 compute-1 ceph-mon[81715]: pgmap v1007: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:37 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:38 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:39.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:39.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:40 compute-1 ceph-mon[81715]: pgmap v1008: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:40 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:40 compute-1 ceph-mon[81715]: Health check update: 8 slow ops, oldest one blocked for 1489 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:41.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:41.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:41 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:41 compute-1 ceph-mon[81715]: pgmap v1009: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:42 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:42 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:43.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:43.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:43 compute-1 ceph-mon[81715]: pgmap v1010: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:43 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:44 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:44 compute-1 ceph-mon[81715]: pgmap v1011: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:44 compute-1 ceph-mon[81715]: Health check update: 8 slow ops, oldest one blocked for 1494 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:45.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:45.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:46 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:47 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:47 compute-1 ceph-mon[81715]: pgmap v1012: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:47.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:47.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:01:47.435 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:01:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:01:47.436 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:01:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:01:47.436 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:01:48 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:49 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:49 compute-1 ceph-mon[81715]: pgmap v1013: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:49.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:49.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:50 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:50 compute-1 ceph-mon[81715]: Health check update: 8 slow ops, oldest one blocked for 1499 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:51 compute-1 podman[223776]: 2026-01-22 14:01:51.096107717 +0000 UTC m=+0.078945687 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 14:01:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:51.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:51 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:51 compute-1 ceph-mon[81715]: pgmap v1014: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:51.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:52 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:53.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:53 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:53 compute-1 ceph-mon[81715]: pgmap v1015: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:53.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:54 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:55.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:55.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:55 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:55 compute-1 ceph-mon[81715]: pgmap v1016: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:55 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1504 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:56 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:56 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:57.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:57.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:57 compute-1 ceph-mon[81715]: pgmap v1017: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:57 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:58 compute-1 sshd-session[223804]: Connection closed by authenticating user root 85.155.224.38 port 57754 [preauth]
Jan 22 14:01:58 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:59.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:01:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:59.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:59 compute-1 ceph-mon[81715]: pgmap v1018: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:59 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:59 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1509 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:01 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:01.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:02:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:01.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:02:02 compute-1 ceph-mon[81715]: pgmap v1019: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:02 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:03 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:03 compute-1 ceph-mon[81715]: pgmap v1020: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:03.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:03.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:04 compute-1 podman[223806]: 2026-01-22 14:02:04.088967786 +0000 UTC m=+0.080738242 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:02:04 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:05.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:05 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:05 compute-1 ceph-mon[81715]: pgmap v1021: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:05 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1514 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:06 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:07.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:07.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:07 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:07 compute-1 ceph-mon[81715]: pgmap v1022: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:08 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:08 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:09.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:09.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:09 compute-1 ceph-mon[81715]: pgmap v1023: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:09 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:11.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:11 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1519 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:11 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:11.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:12 compute-1 ceph-mon[81715]: pgmap v1024: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:12 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:13.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:13.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:13 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:13 compute-1 ceph-mon[81715]: pgmap v1025: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:13 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:14 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:14 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1524 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:15.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:15 compute-1 ceph-mon[81715]: pgmap v1026: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:15 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:16 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:17.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:17.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:18 compute-1 ceph-mon[81715]: pgmap v1027: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:18 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:19.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:19 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:19 compute-1 ceph-mon[81715]: pgmap v1028: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3408450385' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:02:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3408450385' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:02:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:19.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:20 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:20 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1529 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:21.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:21.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:22 compute-1 podman[223826]: 2026-01-22 14:02:22.092407439 +0000 UTC m=+0.085969056 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 14:02:22 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:22 compute-1 ceph-mon[81715]: pgmap v1029: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:23.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:23.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:24 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:24 compute-1 ceph-mon[81715]: pgmap v1030: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:25.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:25.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:26 compute-1 ceph-mon[81715]: pgmap v1031: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:27.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:27.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:27 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:27 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:27 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:27 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:27 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:27 compute-1 ceph-mon[81715]: pgmap v1032: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:27 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:27 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1539 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:28 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:02:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:29.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:02:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:29.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:30 compute-1 ceph-mon[81715]: pgmap v1033: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:30 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:31 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3658520421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:02:31 compute-1 ceph-mon[81715]: pgmap v1034: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2414825092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:02:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:31.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:31.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:32 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:33.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:33 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:33 compute-1 ceph-mon[81715]: pgmap v1035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:33 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3545890113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:02:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:33.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:33 compute-1 sudo[223853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:02:33 compute-1 sudo[223853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:33 compute-1 sudo[223853]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:33 compute-1 sudo[223878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:02:33 compute-1 sudo[223878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:33 compute-1 sudo[223878]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:33 compute-1 sudo[223903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:02:33 compute-1 sudo[223903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:33 compute-1 sudo[223903]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:33 compute-1 sudo[223928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:02:33 compute-1 sudo[223928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:34 compute-1 sudo[223928]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:34 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:34 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1544 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:34 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1840054656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:02:34 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:35 compute-1 podman[223984]: 2026-01-22 14:02:35.069514436 +0000 UTC m=+0.057965949 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:02:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:35.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:35.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:36 compute-1 ceph-mon[81715]: pgmap v1036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:36 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:37.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:37.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:37 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:37 compute-1 ceph-mon[81715]: pgmap v1037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:02:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:02:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:39 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:39 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:02:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:02:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:02:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:02:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:02:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:02:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:39.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:39.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:40 compute-1 ceph-mon[81715]: pgmap v1038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:40 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:40 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:41.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:41.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:42 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:42 compute-1 ceph-mon[81715]: pgmap v1039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:43 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:43 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:43 compute-1 ceph-mon[81715]: pgmap v1040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:43.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:43.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:44 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:44 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:45.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:45.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:45 compute-1 ceph-mon[81715]: pgmap v1041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:45 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:45 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:02:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:02:45 compute-1 sudo[224003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:02:45 compute-1 sudo[224003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:45 compute-1 sudo[224003]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:45 compute-1 sudo[224028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:02:45 compute-1 sudo[224028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:45 compute-1 sudo[224028]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:46 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:47.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:02:47.436 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:02:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:02:47.437 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:02:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:02:47.437 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:02:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:47.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:47 compute-1 ceph-mon[81715]: pgmap v1042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:47 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:48 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:49.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:49.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:50 compute-1 ceph-mon[81715]: pgmap v1043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:50 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:50 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1559 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:51 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:51 compute-1 ceph-mon[81715]: pgmap v1044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:51.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:51.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:52 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:53 compute-1 podman[224053]: 2026-01-22 14:02:53.115722787 +0000 UTC m=+0.106090986 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:02:53 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:53 compute-1 ceph-mon[81715]: pgmap v1045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:53.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:53.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:54 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:55.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:55 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:55 compute-1 ceph-mon[81715]: pgmap v1046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:55 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1564 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:55.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:56 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:56 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:57.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:57.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:58 compute-1 ceph-mon[81715]: pgmap v1047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:58 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:59.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:02:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:59.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:59 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:59 compute-1 ceph-mon[81715]: pgmap v1048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:01.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:01 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1569 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:01.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:02 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:02 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:02 compute-1 ceph-mon[81715]: pgmap v1049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:02 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:03.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:03 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:03 compute-1 ceph-mon[81715]: pgmap v1050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:03 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:03.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:04 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:05.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:05.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:05 compute-1 ceph-mon[81715]: pgmap v1051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:05 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1574 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:05 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:06 compute-1 podman[224081]: 2026-01-22 14:03:06.061402663 +0000 UTC m=+0.050849563 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:03:06 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:07.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:07.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:07 compute-1 ceph-mon[81715]: pgmap v1052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:07 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:08 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:09.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:09.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:09 compute-1 ceph-mon[81715]: pgmap v1053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:09 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:09 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1579 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:10 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:11.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:11.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:12 compute-1 ceph-mon[81715]: pgmap v1054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:12 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:13.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:13 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:13 compute-1 ceph-mon[81715]: pgmap v1055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:13.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:14 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:15.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:15.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:15 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:15 compute-1 ceph-mon[81715]: pgmap v1056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:15 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1584 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:15 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:16 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:16 compute-1 ceph-mon[81715]: pgmap v1057: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:17.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:17.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:17 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:03:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/197283772' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:03:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:03:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/197283772' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:03:18 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/197283772' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:03:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/197283772' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:03:18 compute-1 ceph-mon[81715]: pgmap v1058: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:18 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Jan 22 14:03:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:18.992855) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:03:18 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Jan 22 14:03:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090598992912, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2502, "num_deletes": 251, "total_data_size": 5103665, "memory_usage": 5172304, "flush_reason": "Manual Compaction"}
Jan 22 14:03:18 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090599011937, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 3322743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26565, "largest_seqno": 29062, "table_properties": {"data_size": 3313165, "index_size": 5624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 24372, "raw_average_key_size": 21, "raw_value_size": 3291995, "raw_average_value_size": 2910, "num_data_blocks": 246, "num_entries": 1131, "num_filter_entries": 1131, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090422, "oldest_key_time": 1769090422, "file_creation_time": 1769090598, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 19119 microseconds, and 8643 cpu microseconds.
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:19.011989) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 3322743 bytes OK
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:19.012007) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:19.013431) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:19.013444) EVENT_LOG_v1 {"time_micros": 1769090599013440, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:19.013460) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 5092188, prev total WAL file size 5092188, number of live WAL files 2.
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:19.014585) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(3244KB)], [54(7130KB)]
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090599014645, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 10624120, "oldest_snapshot_seqno": -1}
Jan 22 14:03:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6645 keys, 8912914 bytes, temperature: kUnknown
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090599081314, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 8912914, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8871945, "index_size": 23257, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 174064, "raw_average_key_size": 26, "raw_value_size": 8754051, "raw_average_value_size": 1317, "num_data_blocks": 917, "num_entries": 6645, "num_filter_entries": 6645, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769090599, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:19.081998) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8912914 bytes
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:19.083226) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.2 rd, 133.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.0 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 7165, records dropped: 520 output_compression: NoCompression
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:19.083245) EVENT_LOG_v1 {"time_micros": 1769090599083235, "job": 32, "event": "compaction_finished", "compaction_time_micros": 66747, "compaction_time_cpu_micros": 25488, "output_level": 6, "num_output_files": 1, "total_output_size": 8912914, "num_input_records": 7165, "num_output_records": 6645, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090599084069, "job": 32, "event": "table_file_deletion", "file_number": 56}
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090599085241, "job": 32, "event": "table_file_deletion", "file_number": 54}
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:19.014540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:19.085356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:19.085362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:19.085364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:19.085366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:03:19 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:03:19.085367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:03:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:19.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:19.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:20 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:20 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1589 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:21.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:21 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:21 compute-1 ceph-mon[81715]: pgmap v1059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:21.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:22 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:22 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:23.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:23.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:23 compute-1 ceph-mon[81715]: pgmap v1060: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:23 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:24 compute-1 podman[224102]: 2026-01-22 14:03:24.111637402 +0000 UTC m=+0.094081017 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 14:03:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:25.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:25 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:25 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1594 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:25.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:26 compute-1 ceph-mon[81715]: pgmap v1061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:26 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:03:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:27.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:03:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:27.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:27 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:27 compute-1 ceph-mon[81715]: pgmap v1062: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:27 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:28 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:29.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:29.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:29 compute-1 ceph-mon[81715]: pgmap v1063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:29 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:29 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1599 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:30 compute-1 rsyslogd[1007]: imjournal: 2792 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 22 14:03:30 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:30 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/393160789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:03:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:31.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:31.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:31 compute-1 ceph-mon[81715]: pgmap v1064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:31 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3818535632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:03:32 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:33.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:33.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:33 compute-1 ceph-mon[81715]: pgmap v1065: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:33 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:33 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/364104935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:03:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:34 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:34 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1604 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:34 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2280181550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:03:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:35.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:35.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:35 compute-1 ceph-mon[81715]: pgmap v1066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:35 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:36 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:36 compute-1 ceph-mon[81715]: pgmap v1067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:37 compute-1 podman[224128]: 2026-01-22 14:03:37.064514553 +0000 UTC m=+0.053352092 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 14:03:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:37.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:37.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:37 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:39 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:39 compute-1 ceph-mon[81715]: pgmap v1068: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:39.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:39.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:40 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:40 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1609 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:41.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:41.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:41 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:41 compute-1 ceph-mon[81715]: pgmap v1069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:42 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:42 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:43.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:43.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:43 compute-1 ceph-mon[81715]: pgmap v1070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:43 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:45 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:45 compute-1 ceph-mon[81715]: pgmap v1071: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:45 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1614 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:45.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:45.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:45 compute-1 sudo[224148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:45 compute-1 sudo[224148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:45 compute-1 sudo[224148]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:46 compute-1 sudo[224173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:03:46 compute-1 sudo[224173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:46 compute-1 sudo[224173]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:46 compute-1 sudo[224198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:46 compute-1 sudo[224198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:46 compute-1 sudo[224198]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:46 compute-1 sudo[224223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 14:03:46 compute-1 sudo[224223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:46 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:46 compute-1 podman[224319]: 2026-01-22 14:03:46.644577282 +0000 UTC m=+0.064271591 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 14:03:46 compute-1 podman[224319]: 2026-01-22 14:03:46.745043454 +0000 UTC m=+0.164737743 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:03:47 compute-1 sudo[224223]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:47 compute-1 sudo[224440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:47 compute-1 sudo[224440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:47 compute-1 sudo[224440]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:47.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:47 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:47 compute-1 ceph-mon[81715]: pgmap v1072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:47 compute-1 sudo[224465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:03:47 compute-1 sudo[224465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:47 compute-1 sudo[224465]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:47 compute-1 sudo[224490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:47 compute-1 sudo[224490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:47 compute-1 sudo[224490]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:03:47.438 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:03:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:03:47.439 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:03:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:03:47.439 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:03:47 compute-1 sudo[224515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:03:47 compute-1 sudo[224515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:47.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:47 compute-1 sudo[224515]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:48 compute-1 sudo[224572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:48 compute-1 sudo[224572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:48 compute-1 sudo[224572]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:48 compute-1 sudo[224597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:03:48 compute-1 sudo[224597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:48 compute-1 sudo[224597]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:48 compute-1 sudo[224622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:48 compute-1 sudo[224622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:48 compute-1 sudo[224622]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:48 compute-1 sudo[224647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 22 14:03:48 compute-1 sudo[224647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:48 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:48 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:48 compute-1 sudo[224647]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:49.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:49 compute-1 ceph-mon[81715]: pgmap v1073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:49 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:03:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:03:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:03:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:03:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:03:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:49.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:50 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:51.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:51.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:51 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:51 compute-1 ceph-mon[81715]: pgmap v1074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:51 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:52 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:53.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:53.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:53 compute-1 ceph-mon[81715]: pgmap v1075: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:53 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:53 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:55 compute-1 podman[224690]: 2026-01-22 14:03:55.097880142 +0000 UTC m=+0.087935748 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 22 14:03:55 compute-1 ceph-mon[81715]: pgmap v1076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:55 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:55.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:55.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:55 compute-1 sudo[224718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:55 compute-1 sudo[224718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:55 compute-1 sudo[224718]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:55 compute-1 sudo[224743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:03:55 compute-1 sudo[224743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:55 compute-1 sudo[224743]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:56 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:56 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:56 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:56 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:57.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:57.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:57 compute-1 ceph-mon[81715]: pgmap v1077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:57 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:58 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:59.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:03:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:59.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:59 compute-1 ceph-mon[81715]: pgmap v1078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:59 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:59 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:00 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:01.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:01.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:01 compute-1 ceph-mon[81715]: pgmap v1079: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:01 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:03 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:03 compute-1 ceph-mon[81715]: pgmap v1080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:03.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:03.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:04 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:05 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:05 compute-1 ceph-mon[81715]: pgmap v1081: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:05 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:05 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:05.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:05.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:06 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:07.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:07.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:07 compute-1 ceph-mon[81715]: pgmap v1082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:07 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:08 compute-1 podman[224768]: 2026-01-22 14:04:08.103000505 +0000 UTC m=+0.091359463 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 14:04:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:09.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:09 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:09.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:10 compute-1 ceph-mon[81715]: pgmap v1083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:10 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:10 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:10 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:11.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:11.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:12 compute-1 ceph-mon[81715]: pgmap v1084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:12 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:13 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:13 compute-1 ceph-mon[81715]: pgmap v1085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:13.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:13.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:14 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:14 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:15.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:15.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:16 compute-1 ceph-mon[81715]: pgmap v1086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:16 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:16 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:17 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:17 compute-1 ceph-mon[81715]: pgmap v1087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:17.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:17.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:04:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1206611799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:04:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:04:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1206611799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:04:18 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1206611799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:04:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1206611799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:04:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:19.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:19 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:19 compute-1 ceph-mon[81715]: pgmap v1088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:19.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:20 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:20 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:21.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:21 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:21 compute-1 ceph-mon[81715]: pgmap v1089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:21 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:21.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:22 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:23.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:23.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:23 compute-1 ceph-mon[81715]: pgmap v1090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:23 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:24 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:24 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1654 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:25.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:25.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:25 compute-1 ceph-mon[81715]: pgmap v1091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:25 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:26 compute-1 podman[224787]: 2026-01-22 14:04:26.090755369 +0000 UTC m=+0.085348169 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 14:04:27 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:27 compute-1 ceph-mon[81715]: pgmap v1092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:27.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:27.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:28 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:29 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:29 compute-1 ceph-mon[81715]: pgmap v1093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:29.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:29.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:30 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:30 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1659 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:31.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:31 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:31 compute-1 ceph-mon[81715]: pgmap v1094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:31 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:31.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:32 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:32 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/4110622290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:04:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:33.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:33.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:33 compute-1 ceph-mon[81715]: pgmap v1095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:33 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:33 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1738843088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:04:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:04:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.5 total, 600.0 interval
                                           Cumulative writes: 7297 writes, 27K keys, 7297 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7297 writes, 1549 syncs, 4.71 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 710 writes, 1494 keys, 710 commit groups, 1.0 writes per commit group, ingest: 0.67 MB, 0.00 MB/s
                                           Interval WAL: 710 writes, 312 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 14:04:34 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:34 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/835375088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:04:34 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2981244524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:04:34 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1664 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:35.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:35.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:35 compute-1 ceph-mon[81715]: pgmap v1096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:35 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:36 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:37.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:37.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:38 compute-1 ceph-mon[81715]: pgmap v1097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:38 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:39 compute-1 podman[224813]: 2026-01-22 14:04:39.060551512 +0000 UTC m=+0.051601305 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:04:39 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:39 compute-1 ceph-mon[81715]: pgmap v1098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:39.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:39.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:40 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:40 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1669 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:04:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:41.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:04:41 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:41 compute-1 ceph-mon[81715]: pgmap v1099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:41.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:42 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:42 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:43.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:43 compute-1 ceph-mon[81715]: pgmap v1100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:43 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:43.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:44 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:45.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:45.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:45 compute-1 ceph-mon[81715]: pgmap v1101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:45 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:45 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1674 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:47 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:47 compute-1 ceph-mon[81715]: pgmap v1102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:47.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:04:47.439 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:04:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:04:47.440 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:04:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:04:47.440 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:04:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:47.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:48 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:49.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:49 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:49 compute-1 ceph-mon[81715]: pgmap v1103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:49 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:49.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:51 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1679 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:51 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:51.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:51.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:52 compute-1 ceph-mon[81715]: pgmap v1104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:52 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:53 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:53 compute-1 ceph-mon[81715]: pgmap v1105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:53.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:53.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:54 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:55 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:55 compute-1 ceph-mon[81715]: pgmap v1106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:55 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1684 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:55.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:55.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:56 compute-1 sudo[224832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:04:56 compute-1 sudo[224832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:56 compute-1 sudo[224832]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:56 compute-1 sudo[224863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:04:56 compute-1 sudo[224863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:56 compute-1 sudo[224863]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:56 compute-1 podman[224856]: 2026-01-22 14:04:56.299843008 +0000 UTC m=+0.095278050 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 14:04:56 compute-1 sudo[224902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:04:56 compute-1 sudo[224902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:56 compute-1 sudo[224902]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:56 compute-1 sudo[224931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:04:56 compute-1 sudo[224931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:56 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:56 compute-1 sudo[224931]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:56 compute-1 sudo[224986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:04:56 compute-1 sudo[224986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:56 compute-1 sudo[224986]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:57 compute-1 sudo[225011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:04:57 compute-1 sudo[225011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:57 compute-1 sudo[225011]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:57 compute-1 sudo[225036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:04:57 compute-1 sudo[225036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:57 compute-1 sudo[225036]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:57 compute-1 sudo[225061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- inventory --format=json-pretty --filter-for-batch
Jan 22 14:04:57 compute-1 sudo[225061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:57.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:57 compute-1 podman[225126]: 2026-01-22 14:04:57.444221307 +0000 UTC m=+0.045196078 container create 0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:04:57 compute-1 systemd[1]: Started libpod-conmon-0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd.scope.
Jan 22 14:04:57 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:57 compute-1 ceph-mon[81715]: pgmap v1107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:57 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:04:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:04:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:04:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:04:57 compute-1 podman[225126]: 2026-01-22 14:04:57.424150548 +0000 UTC m=+0.025125339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 14:04:57 compute-1 systemd[1]: Started libcrun container.
Jan 22 14:04:57 compute-1 podman[225126]: 2026-01-22 14:04:57.537175833 +0000 UTC m=+0.138150624 container init 0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 14:04:57 compute-1 podman[225126]: 2026-01-22 14:04:57.544188045 +0000 UTC m=+0.145162816 container start 0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 14:04:57 compute-1 podman[225126]: 2026-01-22 14:04:57.547188857 +0000 UTC m=+0.148163658 container attach 0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 14:04:57 compute-1 sweet_hugle[225143]: 167 167
Jan 22 14:04:57 compute-1 systemd[1]: libpod-0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd.scope: Deactivated successfully.
Jan 22 14:04:57 compute-1 podman[225126]: 2026-01-22 14:04:57.550851587 +0000 UTC m=+0.151826378 container died 0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 14:04:57 compute-1 systemd[1]: var-lib-containers-storage-overlay-cab389985ea1794aa58a3d52e87f91c7e1073325a57f831130d36db403b1f53a-merged.mount: Deactivated successfully.
Jan 22 14:04:57 compute-1 podman[225126]: 2026-01-22 14:04:57.591895622 +0000 UTC m=+0.192870393 container remove 0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 14:04:57 compute-1 systemd[1]: libpod-conmon-0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd.scope: Deactivated successfully.
Jan 22 14:04:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:57.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:57 compute-1 podman[225165]: 2026-01-22 14:04:57.775313934 +0000 UTC m=+0.046844944 container create 498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 14:04:57 compute-1 systemd[1]: Started libpod-conmon-498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5.scope.
Jan 22 14:04:57 compute-1 systemd[1]: Started libcrun container.
Jan 22 14:04:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754dae6e5a81cb00d0d5bb0d116eba1157b0548dc75449a84ebd407f3630c11e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 14:04:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754dae6e5a81cb00d0d5bb0d116eba1157b0548dc75449a84ebd407f3630c11e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 14:04:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754dae6e5a81cb00d0d5bb0d116eba1157b0548dc75449a84ebd407f3630c11e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 14:04:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754dae6e5a81cb00d0d5bb0d116eba1157b0548dc75449a84ebd407f3630c11e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 14:04:57 compute-1 podman[225165]: 2026-01-22 14:04:57.754278138 +0000 UTC m=+0.025809178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 14:04:57 compute-1 podman[225165]: 2026-01-22 14:04:57.858567614 +0000 UTC m=+0.130098634 container init 498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 14:04:57 compute-1 podman[225165]: 2026-01-22 14:04:57.865576905 +0000 UTC m=+0.137107915 container start 498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_golick, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 14:04:57 compute-1 podman[225165]: 2026-01-22 14:04:57.869907505 +0000 UTC m=+0.141438515 container attach 498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 14:04:58 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:59 compute-1 zealous_golick[225182]: [
Jan 22 14:04:59 compute-1 zealous_golick[225182]:     {
Jan 22 14:04:59 compute-1 zealous_golick[225182]:         "available": false,
Jan 22 14:04:59 compute-1 zealous_golick[225182]:         "ceph_device": false,
Jan 22 14:04:59 compute-1 zealous_golick[225182]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:         "lsm_data": {},
Jan 22 14:04:59 compute-1 zealous_golick[225182]:         "lvs": [],
Jan 22 14:04:59 compute-1 zealous_golick[225182]:         "path": "/dev/sr0",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:         "rejected_reasons": [
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "Has a FileSystem",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "Insufficient space (<5GB)"
Jan 22 14:04:59 compute-1 zealous_golick[225182]:         ],
Jan 22 14:04:59 compute-1 zealous_golick[225182]:         "sys_api": {
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "actuators": null,
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "device_nodes": "sr0",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "devname": "sr0",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "human_readable_size": "482.00 KB",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "id_bus": "ata",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "model": "QEMU DVD-ROM",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "nr_requests": "2",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "parent": "/dev/sr0",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "partitions": {},
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "path": "/dev/sr0",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "removable": "1",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "rev": "2.5+",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "ro": "0",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "rotational": "1",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "sas_address": "",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "sas_device_handle": "",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "scheduler_mode": "mq-deadline",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "sectors": 0,
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "sectorsize": "2048",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "size": 493568.0,
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "support_discard": "2048",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "type": "disk",
Jan 22 14:04:59 compute-1 zealous_golick[225182]:             "vendor": "QEMU"
Jan 22 14:04:59 compute-1 zealous_golick[225182]:         }
Jan 22 14:04:59 compute-1 zealous_golick[225182]:     }
Jan 22 14:04:59 compute-1 zealous_golick[225182]: ]
Jan 22 14:04:59 compute-1 systemd[1]: libpod-498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5.scope: Deactivated successfully.
Jan 22 14:04:59 compute-1 systemd[1]: libpod-498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5.scope: Consumed 1.297s CPU time.
Jan 22 14:04:59 compute-1 podman[225165]: 2026-01-22 14:04:59.150141664 +0000 UTC m=+1.421672694 container died 498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_golick, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 14:04:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-754dae6e5a81cb00d0d5bb0d116eba1157b0548dc75449a84ebd407f3630c11e-merged.mount: Deactivated successfully.
Jan 22 14:04:59 compute-1 podman[225165]: 2026-01-22 14:04:59.212884581 +0000 UTC m=+1.484415591 container remove 498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_golick, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 14:04:59 compute-1 systemd[1]: libpod-conmon-498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5.scope: Deactivated successfully.
Jan 22 14:04:59 compute-1 sudo[225061]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:59.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:59 compute-1 ceph-mon[81715]: pgmap v1108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:59 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:04:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:04:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:04:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:59.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:00 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:00 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1689 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:01.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:01 compute-1 ceph-mon[81715]: pgmap v1109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:01 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:05:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:05:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:05:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:05:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:05:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:05:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:05:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:05:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:01.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:02 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:05:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:03.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:05:03 compute-1 ceph-mon[81715]: pgmap v1110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:03 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:03.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:04 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:04 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1694 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:05.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:05.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:05 compute-1 ceph-mon[81715]: pgmap v1111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:05 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:06 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:07.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:07.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:07 compute-1 sudo[226372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:05:07 compute-1 sudo[226372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:05:07 compute-1 sudo[226372]: pam_unix(sudo:session): session closed for user root
Jan 22 14:05:07 compute-1 sudo[226397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:05:07 compute-1 sudo[226397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:05:07 compute-1 sudo[226397]: pam_unix(sudo:session): session closed for user root
Jan 22 14:05:07 compute-1 ceph-mon[81715]: pgmap v1112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:07 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:05:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:05:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:09.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:05:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:09.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:05:10 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:10 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:10 compute-1 podman[226422]: 2026-01-22 14:05:10.125158532 +0000 UTC m=+0.098274523 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 14:05:11 compute-1 ceph-mon[81715]: pgmap v1113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:11 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:11 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1699 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:11 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:11 compute-1 ceph-mon[81715]: pgmap v1114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:11.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:11.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:12 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:13 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:13 compute-1 ceph-mon[81715]: pgmap v1115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:13.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:13.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:14 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:14 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:15 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:05:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:15.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:05:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:15.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:15 compute-1 ceph-mon[81715]: pgmap v1116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:15 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1704 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:15 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:16 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:17.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:05:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:17.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:05:17 compute-1 ceph-mon[81715]: pgmap v1117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:17 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:19 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/508443213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:05:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/508443213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:05:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:19.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:19.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:20 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:20 compute-1 ceph-mon[81715]: pgmap v1118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:20 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:20 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1709 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:21 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:21 compute-1 ceph-mon[81715]: pgmap v1119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:21.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:21.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:22 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:23.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:23.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:23 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:23 compute-1 ceph-mon[81715]: pgmap v1120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:23 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:25 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:25 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:25 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1714 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:25.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:25.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:26 compute-1 ceph-mon[81715]: pgmap v1121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:26 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:27 compute-1 podman[226443]: 2026-01-22 14:05:27.108979779 +0000 UTC m=+0.093842950 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:05:27 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:27 compute-1 ceph-mon[81715]: pgmap v1122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:27.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:27.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:28 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:05:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:29.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:05:29 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:29 compute-1 ceph-mon[81715]: pgmap v1123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:05:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:29.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:05:30 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:31.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:31.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:31 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:31 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1719 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:31 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:05:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5485 writes, 31K keys, 5485 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 5485 writes, 5485 syncs, 1.00 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1833 writes, 9373 keys, 1833 commit groups, 1.0 writes per commit group, ingest: 16.85 MB, 0.03 MB/s
                                           Interval WAL: 1833 writes, 1833 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     40.7      0.81              0.10        16    0.051       0      0       0.0       0.0
                                             L6      1/0    8.50 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.8    120.1    100.6      1.26              0.35        15    0.084     86K   7950       0.0       0.0
                                            Sum      1/0    8.50 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   4.8     73.0     77.1      2.07              0.45        31    0.067     86K   7950       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.6     91.3     92.2      0.57              0.17        10    0.057     33K   2588       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    120.1    100.6      1.26              0.35        15    0.084     86K   7950       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     40.7      0.81              0.10        15    0.054       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.032, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.16 GB write, 0.09 MB/s write, 0.15 GB read, 0.08 MB/s read, 2.1 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 14.31 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(750,13.68 MB,4.50078%) FilterBlock(31,253.67 KB,0.081489%) IndexBlock(31,393.67 KB,0.126462%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 14:05:32 compute-1 ceph-mon[81715]: pgmap v1124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:33.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:33.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:33 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:33 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:33 compute-1 ceph-mon[81715]: pgmap v1125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:33 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:33 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/330811639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:05:35 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:35 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:35 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1659360568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:05:35 compute-1 ceph-mon[81715]: pgmap v1126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:35 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3068488076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:05:35 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1724 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:35.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:35.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:36 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:36 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1439210580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:05:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:37.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:37 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:37 compute-1 ceph-mon[81715]: pgmap v1127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:37 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:37.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:38 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:39.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:39.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:39 compute-1 ceph-mon[81715]: pgmap v1128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:39 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:39 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1729 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:40 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:40 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:41 compute-1 podman[226470]: 2026-01-22 14:05:41.064430917 +0000 UTC m=+0.051977395 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 14:05:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:05:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:41.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:05:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:41.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:41 compute-1 ceph-mon[81715]: pgmap v1129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:41 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:43 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:43 compute-1 ceph-mon[81715]: pgmap v1130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:43.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:43.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:44 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:45 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:45 compute-1 ceph-mon[81715]: pgmap v1131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:45 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1734 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:45.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:45.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:46 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:05:47.441 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:05:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:05:47.441 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:05:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:05:47.441 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:05:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:47.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:47 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:47 compute-1 ceph-mon[81715]: pgmap v1132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:47.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:48 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:48 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:49.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:49 compute-1 ceph-mon[81715]: pgmap v1133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:49 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:49.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:50 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:50 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1739 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:50 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:51.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:51 compute-1 ceph-mon[81715]: pgmap v1134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:51 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:51.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:52 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:05:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:53.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:05:53 compute-1 ceph-mon[81715]: pgmap v1135: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:53 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:53.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:54 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:54 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1744 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:55 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:55.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:55.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:55 compute-1 ceph-mon[81715]: pgmap v1136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:55 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:56 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:57.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:57.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:57 compute-1 ceph-mon[81715]: pgmap v1137: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:57 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:58 compute-1 podman[226489]: 2026-01-22 14:05:58.107003294 +0000 UTC m=+0.101434159 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 14:05:58 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:05:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:59.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:05:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:59.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:59 compute-1 ceph-mon[81715]: pgmap v1138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:59 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:05:59 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1749 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:00 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:00 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:00 compute-1 ceph-mon[81715]: pgmap v1139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 3 op/s
Jan 22 14:06:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:01.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:01.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:01 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:03 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:03 compute-1 ceph-mon[81715]: pgmap v1140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:06:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:03.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:03.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:04 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:04 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:05 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:06:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:05.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:06:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:05.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:05 compute-1 ceph-mon[81715]: pgmap v1141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:06:05 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 1754 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:05 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.831384) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765831427, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2716, "num_deletes": 506, "total_data_size": 5078584, "memory_usage": 5159296, "flush_reason": "Manual Compaction"}
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765853269, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3276667, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29067, "largest_seqno": 31778, "table_properties": {"data_size": 3266581, "index_size": 5556, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 27698, "raw_average_key_size": 20, "raw_value_size": 3242643, "raw_average_value_size": 2389, "num_data_blocks": 243, "num_entries": 1357, "num_filter_entries": 1357, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090600, "oldest_key_time": 1769090600, "file_creation_time": 1769090765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 21946 microseconds, and 9223 cpu microseconds.
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.853328) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3276667 bytes OK
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.853353) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.855106) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.855125) EVENT_LOG_v1 {"time_micros": 1769090765855119, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.855145) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 5065282, prev total WAL file size 5065282, number of live WAL files 2.
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.856870) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(3199KB)], [57(8704KB)]
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765856957, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 12189581, "oldest_snapshot_seqno": -1}
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 6972 keys, 10360528 bytes, temperature: kUnknown
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765946804, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 10360528, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10316156, "index_size": 25828, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 183838, "raw_average_key_size": 26, "raw_value_size": 10190947, "raw_average_value_size": 1461, "num_data_blocks": 1022, "num_entries": 6972, "num_filter_entries": 6972, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769090765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.947138) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10360528 bytes
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.950196) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.5 rd, 115.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.5 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 8002, records dropped: 1030 output_compression: NoCompression
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.950216) EVENT_LOG_v1 {"time_micros": 1769090765950207, "job": 34, "event": "compaction_finished", "compaction_time_micros": 89939, "compaction_time_cpu_micros": 35262, "output_level": 6, "num_output_files": 1, "total_output_size": 10360528, "num_input_records": 8002, "num_output_records": 6972, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765950888, "job": 34, "event": "table_file_deletion", "file_number": 59}
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765953038, "job": 34, "event": "table_file_deletion", "file_number": 57}
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.856637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.953246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.953253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.953255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.953256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:06:05 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.953258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:06:06 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:06:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:07.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:06:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:07.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:07 compute-1 ceph-mon[81715]: pgmap v1142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:06:07 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:08 compute-1 sudo[226516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:06:08 compute-1 sudo[226516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:08 compute-1 sudo[226516]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:08 compute-1 sudo[226541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:06:08 compute-1 sudo[226541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:08 compute-1 sudo[226541]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:08 compute-1 sudo[226566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:06:08 compute-1 sudo[226566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:08 compute-1 sudo[226566]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:08 compute-1 sudo[226591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 14:06:08 compute-1 sudo[226591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:08 compute-1 sudo[226591]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:08 compute-1 sudo[226637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:06:08 compute-1 sudo[226637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:08 compute-1 sudo[226637]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:08 compute-1 sudo[226662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:06:08 compute-1 sudo[226662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:08 compute-1 sudo[226662]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:08 compute-1 sudo[226687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:06:08 compute-1 sudo[226687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:08 compute-1 sudo[226687]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:08 compute-1 sudo[226712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:06:08 compute-1 sudo[226712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:08 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:08 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:06:08 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:06:09 compute-1 sudo[226712]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:09.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:06:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:09.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:06:10 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:10 compute-1 ceph-mon[81715]: pgmap v1143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:06:10 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:06:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:06:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:06:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:06:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:06:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:06:10 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 1759 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:11 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:11 compute-1 ceph-mon[81715]: pgmap v1144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:06:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:11.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:11.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:11 compute-1 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 14:06:12 compute-1 podman[226766]: 2026-01-22 14:06:12.065060504 +0000 UTC m=+0.054491163 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 14:06:12 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:13.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:13.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:14 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:14 compute-1 ceph-mon[81715]: pgmap v1145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 10 op/s
Jan 22 14:06:14 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:15 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:15 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:15 compute-1 ceph-mon[81715]: pgmap v1146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:15 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 1764 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:15.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:15.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:16 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:16 compute-1 sudo[226783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:06:16 compute-1 sudo[226783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:16 compute-1 sudo[226783]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:16 compute-1 sudo[226808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:06:16 compute-1 sudo[226808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:16 compute-1 sudo[226808]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:17.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:17 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:06:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:06:17 compute-1 ceph-mon[81715]: pgmap v1147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:17.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:06:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2515016526' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:06:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:06:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2515016526' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:06:19 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:19 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2515016526' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:06:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2515016526' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:06:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:06:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:19.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:06:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:19.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:20 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:20 compute-1 ceph-mon[81715]: pgmap v1148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:20 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:20 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 1769 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:21 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:21 compute-1 ceph-mon[81715]: pgmap v1149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:21.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:21.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:22 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:22 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:23.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:23.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:24 compute-1 ceph-mon[81715]: pgmap v1150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:24 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:25 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:25 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:25 compute-1 ceph-mon[81715]: pgmap v1151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:25 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 1774 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:25.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:25.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:26 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:27.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:27 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:27 compute-1 ceph-mon[81715]: pgmap v1152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:27.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:28 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:28 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:29 compute-1 podman[226833]: 2026-01-22 14:06:29.139841447 +0000 UTC m=+0.125934614 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 22 14:06:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:29.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:29.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:30 compute-1 ceph-mon[81715]: pgmap v1153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:30 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 1779 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:30 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:31 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:31 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:31 compute-1 ceph-mon[81715]: pgmap v1154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:31.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:31.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:32 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:32 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:33.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:06:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:33.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:06:34 compute-1 ceph-mon[81715]: pgmap v1155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:34 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:35 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:35 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:35 compute-1 ceph-mon[81715]: pgmap v1156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:35 compute-1 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 1784 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:35 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3190855873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:06:35 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/539268978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:06:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:35.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:35.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:36 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:36 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/478780440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:06:36 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1569454389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:06:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:37.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:37 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:37 compute-1 ceph-mon[81715]: pgmap v1157: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:37 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:06:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:37.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:06:38 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:39.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:39.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:39 compute-1 ceph-mon[81715]: pgmap v1158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:39 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:39 compute-1 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 1789 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:40 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:40 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:41.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:42 compute-1 ceph-mon[81715]: pgmap v1159: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:42 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:43 compute-1 podman[226859]: 2026-01-22 14:06:43.065374843 +0000 UTC m=+0.054847147 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 14:06:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:43.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:43.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:44 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:44 compute-1 ceph-mon[81715]: pgmap v1160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:45 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:45 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:45 compute-1 ceph-mon[81715]: pgmap v1161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:45 compute-1 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 1794 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:06:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:45.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:06:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:45.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:46 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:06:47.442 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:06:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:06:47.442 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:06:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:06:47.443 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:06:47 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:47 compute-1 ceph-mon[81715]: pgmap v1162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:47.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:47.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:48 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:48 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:49.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:49 compute-1 ceph-mon[81715]: pgmap v1163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:49 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:49.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:50 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:50 compute-1 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 1798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:50 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:06:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:51.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:06:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:51.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:52 compute-1 ceph-mon[81715]: pgmap v1164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:52 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:53.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:53 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:53 compute-1 ceph-mon[81715]: pgmap v1165: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:53 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:53.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:54 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:54 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 1803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:55 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:55.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:55 compute-1 ceph-mon[81715]: pgmap v1166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:55 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:55.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:56 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:06:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:57.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:06:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:57.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:58 compute-1 ceph-mon[81715]: pgmap v1167: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:58 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:59 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:59 compute-1 ceph-mon[81715]: pgmap v1168: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 22 14:06:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:59.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:06:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:06:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:59.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:00 compute-1 podman[226879]: 2026-01-22 14:07:00.094112677 +0000 UTC m=+0.082748348 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:07:00 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:00 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:07:00 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 1808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:01 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:01 compute-1 ceph-mon[81715]: pgmap v1169: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 22 14:07:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:01.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:01.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:02 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:07:02 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:03.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:03 compute-1 ceph-mon[81715]: pgmap v1170: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 22 14:07:03 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:03.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:04 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:04 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 1813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:05 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:05.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:05.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:05 compute-1 ceph-mon[81715]: pgmap v1171: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 0 B/s wr, 87 op/s
Jan 22 14:07:05 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:07 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:07.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:07.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:08 compute-1 ceph-mon[81715]: pgmap v1172: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 116 op/s
Jan 22 14:07:08 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:09 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:09 compute-1 ceph-mon[81715]: pgmap v1173: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 116 op/s
Jan 22 14:07:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:09.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:09.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:10 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:10 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:10 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 1818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:11 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:11 compute-1 ceph-mon[81715]: pgmap v1174: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 0 B/s wr, 106 op/s
Jan 22 14:07:11 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 14:07:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:11.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 14:07:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:11.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:12 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:13 compute-1 ceph-mon[81715]: pgmap v1175: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 62 op/s
Jan 22 14:07:13 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:13.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:13.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:14 compute-1 podman[226906]: 2026-01-22 14:07:14.05629758 +0000 UTC m=+0.051580108 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 14:07:14 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:15 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:15 compute-1 ceph-mon[81715]: pgmap v1176: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 62 op/s
Jan 22 14:07:15 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 1823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:15 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:07:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:15.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:07:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:15.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:16 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:16 compute-1 sudo[226925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:16 compute-1 sudo[226925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:16 compute-1 sudo[226925]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:17 compute-1 sudo[226950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:07:17 compute-1 sudo[226950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:17 compute-1 sudo[226950]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:17 compute-1 sudo[226975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:17 compute-1 sudo[226975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:17 compute-1 sudo[226975]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:17 compute-1 sudo[227000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:07:17 compute-1 sudo[227000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:17.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:17 compute-1 sudo[227000]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:17.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:18 compute-1 ceph-mon[81715]: pgmap v1177: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Jan 22 14:07:18 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 14:07:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:07:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 14:07:19 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:07:19 compute-1 ceph-mon[81715]: pgmap v1178: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:19.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:19.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:20 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:20 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:20 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 1828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:21 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:07:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:07:21 compute-1 ceph-mon[81715]: pgmap v1179: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:07:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:07:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:07:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:07:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:07:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:07:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:21.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:21.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:22 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:23 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:23 compute-1 ceph-mon[81715]: pgmap v1180: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:23 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:23.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:23.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:24 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:25 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:25 compute-1 ceph-mon[81715]: pgmap v1181: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:25 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:25 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:25.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:25.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:26 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:27 compute-1 ceph-mon[81715]: pgmap v1182: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:27 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:27.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:27.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:29 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:29 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:07:29 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:07:29 compute-1 sudo[227056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:29 compute-1 sudo[227056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:29 compute-1 sudo[227056]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:29.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:29 compute-1 sudo[227081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:07:29 compute-1 sudo[227081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:29 compute-1 sudo[227081]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:29.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:30 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:30 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:30 compute-1 ceph-mon[81715]: pgmap v1183: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:30 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:31 compute-1 podman[227106]: 2026-01-22 14:07:31.095607459 +0000 UTC m=+0.086430487 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 22 14:07:31 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:31 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:31.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:31.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:32 compute-1 ceph-mon[81715]: pgmap v1184: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:32 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:33.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:33.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:33 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:33 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/113320093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:07:34 compute-1 ceph-mon[81715]: pgmap v1185: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:34 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:34 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:34 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/438682208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:07:35 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:35.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:35.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:35 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:35 compute-1 ceph-mon[81715]: pgmap v1186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:37 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:37 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3109525077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:07:37 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/237134865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:07:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.005000135s ======
Jan 22 14:07:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:37.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000135s
Jan 22 14:07:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:37.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:38 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:38 compute-1 ceph-mon[81715]: pgmap v1187: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:39 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:39.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:39.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:40 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:40 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:40 compute-1 ceph-mon[81715]: pgmap v1188: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:40 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:40 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:41.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:41 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:41.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:42 compute-1 ceph-mon[81715]: pgmap v1189: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:42 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:43.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:43 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:43.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:44 compute-1 ceph-mon[81715]: pgmap v1190: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:44 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:44 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:45 compute-1 podman[227133]: 2026-01-22 14:07:45.065247799 +0000 UTC m=+0.057922901 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:07:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:45.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:45.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:46 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:46 compute-1 ceph-mon[81715]: pgmap v1191: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:46 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:07:47.443 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:07:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:07:47.443 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:07:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:07:47.444 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:07:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:47.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:47.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:47 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:49 compute-1 ceph-mon[81715]: pgmap v1192: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:49 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:49.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:49.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:50 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:50 compute-1 ceph-mon[81715]: pgmap v1193: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:50 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:50 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:51 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:51.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:51.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:52 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:52 compute-1 ceph-mon[81715]: pgmap v1194: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:53 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:53.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:53.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:54 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:54 compute-1 ceph-mon[81715]: pgmap v1195: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:55 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:55 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:55 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:55 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:55.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:55.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:56 compute-1 ceph-mon[81715]: pgmap v1196: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:56 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:57.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:57 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:57.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:58 compute-1 ceph-mon[81715]: pgmap v1197: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:58 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:59.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:59 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:59 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:07:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:59.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:00 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:00 compute-1 ceph-mon[81715]: pgmap v1198: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:00 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:01.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:01.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:02 compute-1 podman[227154]: 2026-01-22 14:08:02.091622161 +0000 UTC m=+0.086568322 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 14:08:02 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:02 compute-1 ceph-mon[81715]: pgmap v1199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:03 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:03.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:03.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:04 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:04 compute-1 ceph-mon[81715]: pgmap v1200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:04 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:05 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:05 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:05 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:05.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:05.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:06 compute-1 ceph-mon[81715]: pgmap v1201: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:06 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:07.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:07.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:08 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:09 compute-1 ceph-mon[81715]: pgmap v1202: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:09 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:09.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:09.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:11 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:11 compute-1 ceph-mon[81715]: pgmap v1203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:11 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:11.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:11.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:12 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:12 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:12 compute-1 ceph-mon[81715]: pgmap v1204: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:13 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:13.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:13.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:14 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:14 compute-1 ceph-mon[81715]: pgmap v1205: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:15 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:15 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:15.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:15.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:16 compute-1 podman[227180]: 2026-01-22 14:08:16.05096293 +0000 UTC m=+0.045525042 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:08:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:16 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:16 compute-1 ceph-mon[81715]: pgmap v1206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:16 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:17.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:17 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:17.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:08:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/145215879' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:08:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:08:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/145215879' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:08:18 compute-1 ceph-mon[81715]: pgmap v1207: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:18 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/145215879' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:08:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/145215879' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:08:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:19.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:19 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:19 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1888 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:19.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:20 compute-1 ceph-mon[81715]: pgmap v1208: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:20 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:21.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:21 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:21.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:23 compute-1 ceph-mon[81715]: pgmap v1209: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:23 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:23.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:23.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:24 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:24 compute-1 ceph-mon[81715]: pgmap v1210: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:25 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:25 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:25.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:25.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:26 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:26 compute-1 ceph-mon[81715]: pgmap v1211: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:27 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:27.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:27.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:28 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:28 compute-1 ceph-mon[81715]: pgmap v1212: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:29 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:29.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:30.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:30 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:30 compute-1 ceph-mon[81715]: pgmap v1213: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:30 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:30 compute-1 sudo[227200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:08:30 compute-1 sudo[227200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:30 compute-1 sudo[227200]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:30 compute-1 sudo[227225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:08:30 compute-1 sudo[227225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:30 compute-1 sudo[227225]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:30 compute-1 sudo[227250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:08:30 compute-1 sudo[227250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:30 compute-1 sudo[227250]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:30 compute-1 sudo[227275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:08:30 compute-1 sudo[227275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:30 compute-1 sudo[227275]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:31 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:31.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:32.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:32 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:32 compute-1 ceph-mon[81715]: pgmap v1214: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:33 compute-1 podman[227331]: 2026-01-22 14:08:33.120867467 +0000 UTC m=+0.115640053 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 22 14:08:33 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:33.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:34.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:34 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:08:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:08:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:08:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:08:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:08:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:08:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:08:34 compute-1 ceph-mon[81715]: pgmap v1215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:08:35 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:35 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1084370533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:08:35 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:35.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:36.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:36 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:36 compute-1 ceph-mon[81715]: pgmap v1216: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:36 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2271794858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:08:37 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:37.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:38.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:38 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:38 compute-1 ceph-mon[81715]: pgmap v1217: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:38 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3884516686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:08:39 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:39 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1175285928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:08:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:39.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.971756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090919972095, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2540, "num_deletes": 510, "total_data_size": 4637302, "memory_usage": 4704832, "flush_reason": "Manual Compaction"}
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090919986062, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 2305866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31784, "largest_seqno": 34318, "table_properties": {"data_size": 2297650, "index_size": 4006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 26045, "raw_average_key_size": 20, "raw_value_size": 2276708, "raw_average_value_size": 1819, "num_data_blocks": 172, "num_entries": 1251, "num_filter_entries": 1251, "num_deletions": 510, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090766, "oldest_key_time": 1769090766, "file_creation_time": 1769090919, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 14355 microseconds, and 6270 cpu microseconds.
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.986117) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 2305866 bytes OK
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.986139) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.987407) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.987419) EVENT_LOG_v1 {"time_micros": 1769090919987415, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.987435) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4624724, prev total WAL file size 4686344, number of live WAL files 2.
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.988965) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303033' seq:72057594037927935, type:22 .. '6C6F676D0031323538' seq:0, type:0; will stop at (end)
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(2251KB)], [60(10117KB)]
Jan 22 14:08:39 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090919989031, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 12666394, "oldest_snapshot_seqno": -1}
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 7230 keys, 9297924 bytes, temperature: kUnknown
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920056563, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 9297924, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9254412, "index_size": 24328, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18117, "raw_key_size": 191668, "raw_average_key_size": 26, "raw_value_size": 9127182, "raw_average_value_size": 1262, "num_data_blocks": 948, "num_entries": 7230, "num_filter_entries": 7230, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769090919, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.056910) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 9297924 bytes
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.058747) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.1 rd, 137.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.9 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(9.5) write-amplify(4.0) OK, records in: 8223, records dropped: 993 output_compression: NoCompression
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.058790) EVENT_LOG_v1 {"time_micros": 1769090920058771, "job": 36, "event": "compaction_finished", "compaction_time_micros": 67690, "compaction_time_cpu_micros": 26929, "output_level": 6, "num_output_files": 1, "total_output_size": 9297924, "num_input_records": 8223, "num_output_records": 7230, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920060006, "job": 36, "event": "table_file_deletion", "file_number": 62}
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920062632, "job": 36, "event": "table_file_deletion", "file_number": 60}
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.988894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.062819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.062825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.062827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.062829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.062831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:40.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:40 compute-1 sudo[227358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:08:40 compute-1 sudo[227358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:40 compute-1 sudo[227358]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:40 compute-1 sudo[227383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:08:40 compute-1 sudo[227383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:40 compute-1 sudo[227383]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:40 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:40 compute-1 ceph-mon[81715]: pgmap v1218: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:40 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:08:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:08:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:41 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:41.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:42.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:42 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:42 compute-1 ceph-mon[81715]: pgmap v1219: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:43 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:43 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:43.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:44.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:44 compute-1 ceph-mon[81715]: pgmap v1220: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:44 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:45 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:45 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:45.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:46.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:46 compute-1 ceph-mon[81715]: pgmap v1221: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:46 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:47 compute-1 podman[227408]: 2026-01-22 14:08:47.071873721 +0000 UTC m=+0.056719208 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 14:08:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:08:47.444 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:08:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:08:47.444 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:08:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:08:47.444 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:08:47 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:47.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:48.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:48 compute-1 ceph-mon[81715]: pgmap v1222: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:48 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:49.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:50 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:50 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:50.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:51 compute-1 ceph-mon[81715]: pgmap v1223: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:51 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:51.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:52 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:52 compute-1 ceph-mon[81715]: pgmap v1224: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:52.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:53 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.123195) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933123229, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 431, "num_deletes": 251, "total_data_size": 408278, "memory_usage": 417608, "flush_reason": "Manual Compaction"}
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933127206, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 268216, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34323, "largest_seqno": 34749, "table_properties": {"data_size": 265866, "index_size": 450, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6293, "raw_average_key_size": 19, "raw_value_size": 260953, "raw_average_value_size": 795, "num_data_blocks": 20, "num_entries": 328, "num_filter_entries": 328, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090919, "oldest_key_time": 1769090919, "file_creation_time": 1769090933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 4057 microseconds, and 1525 cpu microseconds.
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.127254) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 268216 bytes OK
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.127270) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.128634) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.128647) EVENT_LOG_v1 {"time_micros": 1769090933128643, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.128679) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 405526, prev total WAL file size 405526, number of live WAL files 2.
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.129032) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(261KB)], [63(9080KB)]
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933129059, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 9566140, "oldest_snapshot_seqno": -1}
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 7046 keys, 7847515 bytes, temperature: kUnknown
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933171626, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 7847515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7806408, "index_size": 22371, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 188633, "raw_average_key_size": 26, "raw_value_size": 7683390, "raw_average_value_size": 1090, "num_data_blocks": 861, "num_entries": 7046, "num_filter_entries": 7046, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769090933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.171986) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 7847515 bytes
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.173250) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 224.0 rd, 183.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.9 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(64.9) write-amplify(29.3) OK, records in: 7558, records dropped: 512 output_compression: NoCompression
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.173273) EVENT_LOG_v1 {"time_micros": 1769090933173262, "job": 38, "event": "compaction_finished", "compaction_time_micros": 42698, "compaction_time_cpu_micros": 20101, "output_level": 6, "num_output_files": 1, "total_output_size": 7847515, "num_input_records": 7558, "num_output_records": 7046, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933173460, "job": 38, "event": "table_file_deletion", "file_number": 65}
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933175698, "job": 38, "event": "table_file_deletion", "file_number": 63}
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.128974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.175840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.175847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.175849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.175851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:53 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.175852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:53.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:54 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:54 compute-1 ceph-mon[81715]: pgmap v1225: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:54.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:55 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:55 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1923 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:55.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:56.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:56 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:56 compute-1 ceph-mon[81715]: pgmap v1226: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:57.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:57 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:57 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:58.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:58 compute-1 ceph-mon[81715]: pgmap v1227: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:58 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:08:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:59.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:59 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:59 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:00.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:00 compute-1 ceph-mon[81715]: pgmap v1228: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:00 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:01.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:01 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:02.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:02 compute-1 ceph-mon[81715]: pgmap v1229: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:02 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:03.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:04 compute-1 podman[227427]: 2026-01-22 14:09:04.122512191 +0000 UTC m=+0.108721665 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:09:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:04.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:04 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:04 compute-1 ceph-mon[81715]: pgmap v1230: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:05 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:05 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:05.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:06.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:06 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:06 compute-1 ceph-mon[81715]: pgmap v1231: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:07 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:07.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:08.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:08 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:08 compute-1 ceph-mon[81715]: pgmap v1232: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:09 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:09.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:10.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:10 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:10 compute-1 ceph-mon[81715]: pgmap v1233: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:10 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1937 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:11 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:11.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:12.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:12 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:12 compute-1 ceph-mon[81715]: pgmap v1234: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:13 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:13.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:14.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:14 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:14 compute-1 ceph-mon[81715]: pgmap v1235: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:14 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:15 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1942 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:15 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:15.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:16.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:16 compute-1 ceph-mon[81715]: pgmap v1236: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:16 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:17 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:17.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:18 compute-1 podman[227454]: 2026-01-22 14:09:18.056445897 +0000 UTC m=+0.051748182 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 14:09:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:18.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:18 compute-1 ceph-mon[81715]: pgmap v1237: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1899736290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:09:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1899736290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:09:18 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:19 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:19.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:20.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:20 compute-1 ceph-mon[81715]: pgmap v1238: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:20 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1947 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:20 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:21.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:21 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:22.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:22 compute-1 ceph-mon[81715]: pgmap v1239: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:22 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:23.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:23 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:24.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:24 compute-1 ceph-mon[81715]: pgmap v1240: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:24 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:24 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1952 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:25.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:25 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:26.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:27 compute-1 ceph-mon[81715]: pgmap v1241: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:27 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:27.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:28 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:28 compute-1 ceph-mon[81715]: pgmap v1242: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:28.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:29 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:29.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:30 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:30 compute-1 ceph-mon[81715]: pgmap v1243: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:30 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:30.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:31 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:31.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:32.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:32 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:32 compute-1 ceph-mon[81715]: pgmap v1244: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:33 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:33.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:34.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:34 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:34 compute-1 ceph-mon[81715]: pgmap v1245: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:35 compute-1 podman[227474]: 2026-01-22 14:09:35.086571171 +0000 UTC m=+0.080688541 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 22 14:09:35 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:35 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1962 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:35.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:36.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:36 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:36 compute-1 ceph-mon[81715]: pgmap v1246: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:36 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2613868033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:09:37 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:37 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1472558186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:09:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:37.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:38.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:38 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:38 compute-1 ceph-mon[81715]: pgmap v1247: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:38 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1465988269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:09:39 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:39 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1883155954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:09:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:39.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:40.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:40 compute-1 sudo[227501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:09:40 compute-1 sudo[227501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:40 compute-1 sudo[227501]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:40 compute-1 sudo[227526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:09:40 compute-1 sudo[227526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:40 compute-1 sudo[227526]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:40 compute-1 sudo[227551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:09:40 compute-1 sudo[227551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:40 compute-1 sudo[227551]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:40 compute-1 sudo[227576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:09:40 compute-1 sudo[227576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:40 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:40 compute-1 ceph-mon[81715]: pgmap v1248: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:40 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1967 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:40 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:41 compute-1 sudo[227576]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:41 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:09:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:09:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:09:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:09:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:09:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:09:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:41.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:42.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:42 compute-1 ceph-mon[81715]: pgmap v1249: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:42 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:43 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:09:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:43.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:09:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:44.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:44 compute-1 ceph-mon[81715]: pgmap v1250: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:44 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:45 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1972 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:45 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:45.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:09:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:46.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:09:46 compute-1 ceph-mon[81715]: pgmap v1251: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:46 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:09:47.445 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:09:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:09:47.445 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:09:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:09:47.445 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:09:47 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:47.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:48.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:48 compute-1 sudo[227631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:09:48 compute-1 sudo[227631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:48 compute-1 sudo[227631]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:48 compute-1 sudo[227662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:09:48 compute-1 sudo[227662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:48 compute-1 sudo[227662]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:48 compute-1 podman[227655]: 2026-01-22 14:09:48.609721878 +0000 UTC m=+0.081177945 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 14:09:48 compute-1 ceph-mon[81715]: pgmap v1252: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:09:48 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:09:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:09:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:49.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:09:49 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:49 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1977 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:50.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:50 compute-1 ceph-mon[81715]: pgmap v1253: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:50 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:51.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:52.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:52 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:53 compute-1 ceph-mon[81715]: pgmap v1254: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:53 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:53.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:54.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:54 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:54 compute-1 ceph-mon[81715]: pgmap v1255: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:55 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:55 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1982 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:09:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:55.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:09:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:56.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:56 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:56 compute-1 ceph-mon[81715]: pgmap v1256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:57.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:57 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:57 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:58.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:59 compute-1 ceph-mon[81715]: pgmap v1257: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:59 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:09:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:59.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:00.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:01 compute-1 ceph-mon[81715]: pgmap v1258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:01 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:01 compute-1 ceph-mon[81715]: Health detail: HEALTH_WARN 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops
Jan 22 14:10:01 compute-1 ceph-mon[81715]: [WRN] SLOW_OPS: 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops
Jan 22 14:10:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:01.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:02.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:02 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:02 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:02 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:02 compute-1 ceph-mon[81715]: pgmap v1259: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:03.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:04 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:04 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:04.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:05 compute-1 ceph-mon[81715]: pgmap v1260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:05 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:05 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:05.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:06 compute-1 podman[227699]: 2026-01-22 14:10:06.112698623 +0000 UTC m=+0.106596778 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 14:10:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:06.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:06 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:06 compute-1 ceph-mon[81715]: pgmap v1261: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:07 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:07.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:08.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:08 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:08 compute-1 ceph-mon[81715]: pgmap v1262: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:09 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:09.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:10.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:10 compute-1 ceph-mon[81715]: pgmap v1263: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:10 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:10 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:11 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:11 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1847141595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:11 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/4089952682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:11 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2985836776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:11.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:12.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:12 compute-1 ceph-mon[81715]: pgmap v1264: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:12 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:12 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:12 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:10:12.850 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:10:12 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:10:12.852 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:10:13 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:13.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:14.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:14 compute-1 ceph-mon[81715]: pgmap v1265: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:14 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:15 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2002 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:15 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:15.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:16.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:16 compute-1 ceph-mon[81715]: pgmap v1266: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 14:10:16 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:17.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:18.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:18 compute-1 ceph-mon[81715]: pgmap v1267: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 14:10:18 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2933046963' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:10:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2933046963' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:10:18 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:10:18.854 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:10:19 compute-1 podman[227726]: 2026-01-22 14:10:19.074230186 +0000 UTC m=+0.059865863 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:10:19 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:10:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:19.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:10:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:20.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:20 compute-1 ceph-mon[81715]: pgmap v1268: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 14:10:20 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:20 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:21 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:21.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:22.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:23 compute-1 ceph-mon[81715]: pgmap v1269: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 14:10:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:23.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:24 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:24 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:24 compute-1 ceph-mon[81715]: pgmap v1270: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 14:10:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:24.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:25 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:25 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:25.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:26.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:26 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:26 compute-1 ceph-mon[81715]: pgmap v1271: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 14:10:27 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:27 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:27.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:28.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:28 compute-1 ceph-mon[81715]: pgmap v1272: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:28 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:29.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:30.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:30 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:31 compute-1 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 14:10:31 compute-1 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 14:10:31 compute-1 ceph-mon[81715]: pgmap v1273: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:31 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:31 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:31.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:32.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:32 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:32 compute-1 ceph-mon[81715]: pgmap v1274: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:33 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:33.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:34.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:34 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:34 compute-1 ceph-mon[81715]: pgmap v1275: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:35 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:35 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1327466574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:35 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:35.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:36.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:36 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:36 compute-1 ceph-mon[81715]: pgmap v1276: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:36 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/970741415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:36 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2036072568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:37 compute-1 podman[227746]: 2026-01-22 14:10:37.090462377 +0000 UTC m=+0.080525927 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 14:10:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:37.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:38 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:38 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/187506382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:38.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:39 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:39 compute-1 ceph-mon[81715]: pgmap v1277: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:39 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:39 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2167703012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:39.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:40 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:40 compute-1 ceph-mon[81715]: pgmap v1278: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2822757333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:10:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:40.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:10:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:41 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:41 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:41.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:42.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:42 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:42 compute-1 ceph-mon[81715]: pgmap v1279: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 6.7 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 22 14:10:43 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:43.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:10:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:44.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:10:44 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:44 compute-1 ceph-mon[81715]: pgmap v1280: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 6.7 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 22 14:10:45 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:45 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:45.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:46.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:46 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:46 compute-1 ceph-mon[81715]: pgmap v1281: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 14:10:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:10:47.445 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:10:47.446 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:10:47.446 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:47 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:47 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:47.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:48.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:48 compute-1 sudo[227770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:10:48 compute-1 sudo[227770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:48 compute-1 sudo[227770]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:48 compute-1 ceph-mon[81715]: pgmap v1282: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 14:10:48 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/61830410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:49 compute-1 sudo[227795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:10:49 compute-1 sudo[227795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:49 compute-1 sudo[227795]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:49 compute-1 sudo[227820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:10:49 compute-1 sudo[227820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:49 compute-1 sudo[227820]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:49 compute-1 sudo[227851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:10:49 compute-1 sudo[227851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:49 compute-1 podman[227844]: 2026-01-22 14:10:49.178612157 +0000 UTC m=+0.056291991 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 14:10:49 compute-1 sudo[227851]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:49.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:50 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:50 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1226628873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:50 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1368713803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 14:10:50 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3062574627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:10:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:10:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:10:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:10:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:10:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:10:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:50.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:51 compute-1 ceph-mon[81715]: pgmap v1283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 14:10:51 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:51 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:10:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:51.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:10:52 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:52 compute-1 ceph-mon[81715]: pgmap v1284: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 14:10:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:52.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:53 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:53.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:54 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:54 compute-1 ceph-mon[81715]: pgmap v1285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 22 14:10:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:54.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:55 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:55 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 22 14:10:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:56.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:56 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:10:56 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:56 compute-1 ceph-mon[81715]: pgmap v1286: 305 pgs: 2 active+clean+laggy, 303 active+clean; 276 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 109 op/s
Jan 22 14:10:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:56.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:57 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:57 compute-1 sudo[227921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:10:57 compute-1 sudo[227921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:57 compute-1 sudo[227921]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:57 compute-1 sudo[227946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:10:57 compute-1 sudo[227946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:57 compute-1 sudo[227946]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:58.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:10:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:10:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:58.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:10:58 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:10:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:10:58 compute-1 ceph-mon[81715]: pgmap v1287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 276 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 1.5 MiB/s wr, 44 op/s
Jan 22 14:10:59 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:00.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:00.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:00 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:00 compute-1 ceph-mon[81715]: pgmap v1288: 305 pgs: 2 active+clean+laggy, 303 active+clean; 293 MiB data, 347 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Jan 22 14:11:00 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 2048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:01 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:02.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:02.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:02 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:02 compute-1 ceph-mon[81715]: pgmap v1289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 298 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 206 op/s
Jan 22 14:11:03 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:04.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:04.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:04 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:04 compute-1 ceph-mon[81715]: pgmap v1290: 305 pgs: 2 active+clean+laggy, 303 active+clean; 298 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 206 op/s
Jan 22 14:11:05 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:05 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:05 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:06.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:06.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:06 compute-1 ceph-mon[81715]: pgmap v1291: 305 pgs: 2 active+clean+laggy, 303 active+clean; 252 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 235 op/s
Jan 22 14:11:06 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:07 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:07 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3749524116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:08.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:08 compute-1 podman[227971]: 2026-01-22 14:11:08.09963682 +0000 UTC m=+0.085941281 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 14:11:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:08.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:08 compute-1 ceph-mon[81715]: pgmap v1292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 252 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 727 KiB/s wr, 190 op/s
Jan 22 14:11:08 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:09 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:11:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:10.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:11:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:10.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:10 compute-1 ceph-mon[81715]: pgmap v1293: 305 pgs: 2 active+clean+laggy, 303 active+clean; 252 MiB data, 358 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 727 KiB/s wr, 190 op/s
Jan 22 14:11:10 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:10 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:11 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:12.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:12.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:12 compute-1 ceph-mon[81715]: pgmap v1294: 305 pgs: 2 active+clean+laggy, 303 active+clean; 247 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 145 op/s
Jan 22 14:11:12 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:12 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/4163491815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:13 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:14.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:14.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:14 compute-1 ceph-mon[81715]: pgmap v1295: 305 pgs: 2 active+clean+laggy, 303 active+clean; 247 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 144 KiB/s rd, 1.1 MiB/s wr, 59 op/s
Jan 22 14:11:14 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:15 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:11:15.586 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:11:15 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:11:15.587 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:11:15 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:15 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:16.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:16.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:16 compute-1 ceph-mon[81715]: pgmap v1296: 305 pgs: 2 active+clean+laggy, 303 active+clean; 157 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 170 KiB/s rd, 1.7 MiB/s wr, 99 op/s
Jan 22 14:11:16 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:16 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1231319645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:18.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:18 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:18.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:18 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:11:18.589 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:11:19 compute-1 ceph-mon[81715]: pgmap v1297: 305 pgs: 2 active+clean+laggy, 303 active+clean; 157 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 1.6 MiB/s wr, 69 op/s
Jan 22 14:11:19 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3325865220' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:11:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3325865220' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:11:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:20.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:20 compute-1 podman[227997]: 2026-01-22 14:11:20.063597646 +0000 UTC m=+0.053160798 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 22 14:11:20 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:20 compute-1 ceph-mon[81715]: pgmap v1298: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 68 KiB/s rd, 1.6 MiB/s wr, 76 op/s
Jan 22 14:11:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:11:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:20.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:11:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:21 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:21 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:11:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:22.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:11:22 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:22 compute-1 ceph-mon[81715]: pgmap v1299: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 68 KiB/s rd, 1.6 MiB/s wr, 76 op/s
Jan 22 14:11:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:22.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:23 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:24.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:24.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:24 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:24 compute-1 ceph-mon[81715]: pgmap v1300: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 565 KiB/s wr, 46 op/s
Jan 22 14:11:24 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:25 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:25 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:11:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:26.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:11:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:26.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:26 compute-1 ceph-mon[81715]: pgmap v1301: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 565 KiB/s wr, 46 op/s
Jan 22 14:11:26 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:27 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:28.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:11:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:28.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:11:28 compute-1 ceph-mon[81715]: pgmap v1302: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 6 op/s
Jan 22 14:11:28 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:29 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:30.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:30.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:30 compute-1 ceph-mon[81715]: pgmap v1303: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 6 op/s
Jan 22 14:11:30 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:30 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:30 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:32.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:32.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:33 compute-1 ceph-mon[81715]: pgmap v1304: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:11:33 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:33 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:34.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:34 compute-1 ceph-mon[81715]: pgmap v1305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:11:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:34.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:35 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:36.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:36 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:36 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:36 compute-1 ceph-mon[81715]: pgmap v1306: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:11:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:36.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:37 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:37 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/744848052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:38.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:38 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:38 compute-1 ceph-mon[81715]: pgmap v1307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:11:38 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3987265266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:38.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:39 compute-1 podman[228016]: 2026-01-22 14:11:39.08962493 +0000 UTC m=+0.079806569 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
container_name=ovn_controller, managed_by=edpm_ansible)
Jan 22 14:11:39 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:39 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1569152983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:40.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:40 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:40 compute-1 ceph-mon[81715]: pgmap v1308: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 op/s
Jan 22 14:11:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2149985450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:40.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:41 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:41 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/32100446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:42.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:42 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:42 compute-1 ceph-mon[81715]: pgmap v1309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.7 MiB/s wr, 15 op/s
Jan 22 14:11:42 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/433737356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:42.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:43 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:11:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:44.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:11:44 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:44 compute-1 ceph-mon[81715]: pgmap v1310: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.7 MiB/s wr, 15 op/s
Jan 22 14:11:44 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3018484842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:11:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:44.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:45 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:45 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/4002379441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:11:45 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 2093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.477204) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105477256, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2467, "num_deletes": 251, "total_data_size": 4800566, "memory_usage": 4878776, "flush_reason": "Manual Compaction"}
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105496457, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 3142489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34754, "largest_seqno": 37216, "table_properties": {"data_size": 3133257, "index_size": 5342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23921, "raw_average_key_size": 21, "raw_value_size": 3112819, "raw_average_value_size": 2796, "num_data_blocks": 230, "num_entries": 1113, "num_filter_entries": 1113, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090934, "oldest_key_time": 1769090934, "file_creation_time": 1769091105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 19289 microseconds, and 9213 cpu microseconds.
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.496501) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 3142489 bytes OK
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.496521) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.498866) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.498884) EVENT_LOG_v1 {"time_micros": 1769091105498878, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.498903) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 4789274, prev total WAL file size 4789274, number of live WAL files 2.
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.500470) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(3068KB)], [66(7663KB)]
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105500571, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 10990004, "oldest_snapshot_seqno": -1}
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 7644 keys, 9277662 bytes, temperature: kUnknown
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105560407, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 9277662, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9232017, "index_size": 25437, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 203092, "raw_average_key_size": 26, "raw_value_size": 9097871, "raw_average_value_size": 1190, "num_data_blocks": 983, "num_entries": 7644, "num_filter_entries": 7644, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.560970) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9277662 bytes
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.562101) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.4 rd, 154.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.5 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8159, records dropped: 515 output_compression: NoCompression
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.562118) EVENT_LOG_v1 {"time_micros": 1769091105562109, "job": 40, "event": "compaction_finished", "compaction_time_micros": 59927, "compaction_time_cpu_micros": 25016, "output_level": 6, "num_output_files": 1, "total_output_size": 9277662, "num_input_records": 8159, "num_output_records": 7644, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105562875, "job": 40, "event": "table_file_deletion", "file_number": 68}
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105564456, "job": 40, "event": "table_file_deletion", "file_number": 66}
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.500349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.564707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.564715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.564720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.564722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:11:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.564724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:11:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:46.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:46.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:46 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:46 compute-1 ceph-mon[81715]: pgmap v1311: 305 pgs: 2 active+clean+laggy, 303 active+clean; 215 MiB data, 316 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 3.5 MiB/s wr, 42 op/s
Jan 22 14:11:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:11:47.447 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:11:47.447 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:11:47.448 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:11:47 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:48.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:48.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:48 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:48 compute-1 ceph-mon[81715]: pgmap v1312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 215 MiB data, 316 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 3.5 MiB/s wr, 42 op/s
Jan 22 14:11:49 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:50.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:50.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:50 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:50 compute-1 ceph-mon[81715]: pgmap v1313: 305 pgs: 2 active+clean+laggy, 303 active+clean; 215 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.5 MiB/s wr, 91 op/s
Jan 22 14:11:50 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 2098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:51 compute-1 podman[228042]: 2026-01-22 14:11:51.063649151 +0000 UTC m=+0.054771091 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 14:11:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:51 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:51 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:52.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:52.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:52 compute-1 ceph-mon[81715]: pgmap v1314: 305 pgs: 2 active+clean+laggy, 303 active+clean; 215 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 116 op/s
Jan 22 14:11:52 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:53 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:54.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:54.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:54 compute-1 ceph-mon[81715]: pgmap v1315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 215 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 14:11:54 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:54 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2093419999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:55 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:55 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 2103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:56.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:56.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:56 compute-1 ceph-mon[81715]: pgmap v1316: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 22 14:11:56 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:57 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:57 compute-1 sudo[228062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:11:57 compute-1 sudo[228062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:11:57 compute-1 sudo[228062]: pam_unix(sudo:session): session closed for user root
Jan 22 14:11:58 compute-1 sudo[228087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:11:58 compute-1 sudo[228087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:11:58 compute-1 sudo[228087]: pam_unix(sudo:session): session closed for user root
Jan 22 14:11:58 compute-1 sudo[228112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:11:58 compute-1 sudo[228112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:11:58 compute-1 sudo[228112]: pam_unix(sudo:session): session closed for user root
Jan 22 14:11:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:58.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:58 compute-1 sudo[228137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:11:58 compute-1 sudo[228137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:11:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:11:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:58.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:58 compute-1 sudo[228137]: pam_unix(sudo:session): session closed for user root
Jan 22 14:11:58 compute-1 ceph-mon[81715]: pgmap v1317: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 22 14:11:58 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:59 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:11:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:11:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:11:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:11:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:11:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:12:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:00.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:00.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:00 compute-1 ceph-mon[81715]: pgmap v1318: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 22 14:12:00 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:00 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1699627580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:12:00 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 2108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:02 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:02.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:02.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:03 compute-1 ceph-mon[81715]: pgmap v1319: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 589 KiB/s rd, 13 KiB/s wr, 51 op/s
Jan 22 14:12:03 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:04.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:04 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:04 compute-1 ceph-mon[81715]: pgmap v1320: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 22 14:12:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:04.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:05 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:06.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:06.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:06 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:06 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 2113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:06 compute-1 ceph-mon[81715]: pgmap v1321: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.5 MiB/s wr, 42 op/s
Jan 22 14:12:06 compute-1 sudo[228195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:12:06 compute-1 sudo[228195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:12:06 compute-1 sudo[228195]: pam_unix(sudo:session): session closed for user root
Jan 22 14:12:06 compute-1 sudo[228220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:12:06 compute-1 sudo[228220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:12:06 compute-1 sudo[228220]: pam_unix(sudo:session): session closed for user root
Jan 22 14:12:07 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:12:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:12:07 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:08.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:08.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:08 compute-1 ceph-mon[81715]: pgmap v1322: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:12:08 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:09 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:10.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:10.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:10 compute-1 podman[228245]: 2026-01-22 14:12:10.516542498 +0000 UTC m=+0.083403364 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 14:12:10 compute-1 ceph-mon[81715]: pgmap v1323: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:12:10 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:10 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 2118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:11 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:12.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:12.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:12 compute-1 ceph-mon[81715]: pgmap v1324: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:12:12 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:13 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:14.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:14.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:14 compute-1 ceph-mon[81715]: pgmap v1325: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:12:14 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:15 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:15 compute-1 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 2123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:16.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:16.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:17 compute-1 ceph-mon[81715]: pgmap v1326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:12:17 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:12:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:18.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:12:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:12:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3872109376' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:12:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:12:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3872109376' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:12:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:12:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:18.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:12:18 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:18 compute-1 ceph-mon[81715]: pgmap v1327: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3872109376' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:12:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3872109376' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:12:19 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:19 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:20.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:20.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:21 compute-1 ceph-mon[81715]: pgmap v1328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:21 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:21 compute-1 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 2128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:22 compute-1 podman[228272]: 2026-01-22 14:12:22.061943188 +0000 UTC m=+0.054620416 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 14:12:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:22.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:22 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:22 compute-1 ceph-mon[81715]: pgmap v1329: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:22.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:23 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:23 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1692563106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:12:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:12:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:24.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:12:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:24.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:24 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:24 compute-1 ceph-mon[81715]: pgmap v1330: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:24 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:25 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:25 compute-1 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 2133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:26.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:26.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:26 compute-1 ceph-mon[81715]: pgmap v1331: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:26 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:28 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:28.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:28.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:29 compute-1 ceph-mon[81715]: pgmap v1332: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:29 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:30.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:30.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:31 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:31 compute-1 ceph-mon[81715]: pgmap v1333: 305 pgs: 2 active+clean+laggy, 303 active+clean; 223 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 391 KiB/s wr, 1 op/s
Jan 22 14:12:31 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:31 compute-1 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 2138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:32.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:32 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:32 compute-1 ceph-mon[81715]: pgmap v1334: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:12:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:32.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:33 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:34.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:34.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:35 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:35 compute-1 ceph-mon[81715]: pgmap v1335: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:12:35 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:36.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:36 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:36 compute-1 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 2143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:36 compute-1 ceph-mon[81715]: pgmap v1336: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:12:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:36.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:37 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:38.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:38.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:38 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:38 compute-1 ceph-mon[81715]: pgmap v1337: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:12:38 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:38 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:39 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:40.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:40.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:40 compute-1 ceph-mon[81715]: pgmap v1338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:12:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/4207568101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:12:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2193478577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:12:40 compute-1 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 2148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:40 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:41 compute-1 podman[228292]: 2026-01-22 14:12:41.098490911 +0000 UTC m=+0.085338106 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 14:12:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:42 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3007474767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:12:42 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:42 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3654665343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:12:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:12:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:42.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:12:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:42.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:43 compute-1 ceph-mon[81715]: pgmap v1339: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 8.0 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Jan 22 14:12:43 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:44 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:44 compute-1 ceph-mon[81715]: pgmap v1340: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:44.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:44.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:45 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:46 compute-1 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 2153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:46 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:46 compute-1 ceph-mon[81715]: pgmap v1341: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:46.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:12:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:46.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:12:47 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:12:47.448 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:12:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:12:47.448 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:12:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:12:47.449 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:12:48 compute-1 ceph-mon[81715]: pgmap v1342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:48 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:48.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:48.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:49 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:50 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:50 compute-1 ceph-mon[81715]: pgmap v1343: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:50.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:50.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:51 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:51 compute-1 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 2157 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:52.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:52 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:52 compute-1 ceph-mon[81715]: pgmap v1344: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:52.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:53 compute-1 podman[228320]: 2026-01-22 14:12:53.076589142 +0000 UTC m=+0.060967826 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 14:12:53 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:54.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:54 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:54 compute-1 ceph-mon[81715]: pgmap v1345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:54.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:55 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:56.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:56 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:56 compute-1 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 2162 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:56 compute-1 ceph-mon[81715]: pgmap v1346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:56.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:57 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:57 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:58.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:12:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:58.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:58 compute-1 ceph-mon[81715]: pgmap v1347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:58 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:12:59 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:00.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:00.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:00 compute-1 ceph-mon[81715]: pgmap v1348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:00 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:00 compute-1 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 2167 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:02 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:13:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:02.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:13:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:02.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:03 compute-1 ceph-mon[81715]: pgmap v1349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:03 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:04.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:04 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:13:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:04.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:13:05 compute-1 ceph-mon[81715]: pgmap v1350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:05 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:06.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:06 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:06 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2172 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:06 compute-1 ceph-mon[81715]: pgmap v1351: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:06.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:07 compute-1 sudo[228341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:13:07 compute-1 sudo[228341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:07 compute-1 sudo[228341]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:07 compute-1 sudo[228366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:13:07 compute-1 sudo[228366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:07 compute-1 sudo[228366]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:07 compute-1 sudo[228391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:13:07 compute-1 sudo[228391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:07 compute-1 sudo[228391]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:07 compute-1 sudo[228416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:13:07 compute-1 sudo[228416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:07 compute-1 sudo[228416]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:07 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:08.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:08.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:08 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:08 compute-1 ceph-mon[81715]: pgmap v1352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:08 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:09 compute-1 sshd-session[228473]: error: kex_exchange_identification: read: Connection reset by peer
Jan 22 14:13:09 compute-1 sshd-session[228473]: Connection reset by 176.120.22.52 port 55887
Jan 22 14:13:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:10.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:10.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:13:10 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:13:10 compute-1 ceph-mon[81715]: pgmap v1353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:13:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:13:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:11 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:11 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2177 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:13:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:13:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:13:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:13:11 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:12 compute-1 podman[228474]: 2026-01-22 14:13:12.097612579 +0000 UTC m=+0.088102269 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 14:13:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:12.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:12.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:13 compute-1 ceph-mon[81715]: pgmap v1354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:13 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:14 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:14.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:14.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:15 compute-1 ceph-mon[81715]: pgmap v1355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:15 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.722166) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195722269, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1411, "num_deletes": 256, "total_data_size": 2584745, "memory_usage": 2613200, "flush_reason": "Manual Compaction"}
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195735547, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 1687738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37221, "largest_seqno": 38627, "table_properties": {"data_size": 1682028, "index_size": 2850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14642, "raw_average_key_size": 20, "raw_value_size": 1669526, "raw_average_value_size": 2351, "num_data_blocks": 124, "num_entries": 710, "num_filter_entries": 710, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091106, "oldest_key_time": 1769091106, "file_creation_time": 1769091195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 13415 microseconds, and 7489 cpu microseconds.
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.735610) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 1687738 bytes OK
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.735636) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.737447) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.737469) EVENT_LOG_v1 {"time_micros": 1769091195737461, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.737491) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2577840, prev total WAL file size 2577840, number of live WAL files 2.
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.738696) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323537' seq:72057594037927935, type:22 .. '6C6F676D0031353039' seq:0, type:0; will stop at (end)
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(1648KB)], [69(9060KB)]
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195738762, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 10965400, "oldest_snapshot_seqno": -1}
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 7829 keys, 10801710 bytes, temperature: kUnknown
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195803424, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 10801710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10753504, "index_size": 27550, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 208555, "raw_average_key_size": 26, "raw_value_size": 10614726, "raw_average_value_size": 1355, "num_data_blocks": 1068, "num_entries": 7829, "num_filter_entries": 7829, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.803789) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10801710 bytes
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.805025) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.3 rd, 166.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.8 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(12.9) write-amplify(6.4) OK, records in: 8354, records dropped: 525 output_compression: NoCompression
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.805047) EVENT_LOG_v1 {"time_micros": 1769091195805036, "job": 42, "event": "compaction_finished", "compaction_time_micros": 64763, "compaction_time_cpu_micros": 25199, "output_level": 6, "num_output_files": 1, "total_output_size": 10801710, "num_input_records": 8354, "num_output_records": 7829, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195805794, "job": 42, "event": "table_file_deletion", "file_number": 71}
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195807764, "job": 42, "event": "table_file_deletion", "file_number": 69}
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.738594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.807876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.807882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.807883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.807885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:13:15 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.807888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:13:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:16.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:16 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:16 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2182 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:16 compute-1 ceph-mon[81715]: pgmap v1356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:13:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:16.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:13:17 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:17 compute-1 sudo[228500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:13:17 compute-1 sudo[228500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:17 compute-1 sudo[228500]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:17 compute-1 sudo[228525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:13:17 compute-1 sudo[228525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:17 compute-1 sudo[228525]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:18.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:18 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:18 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:13:18 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:13:18 compute-1 ceph-mon[81715]: pgmap v1357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/379890725' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:13:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/379890725' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:13:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:18.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:19 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:20.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:20 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:20 compute-1 ceph-mon[81715]: pgmap v1358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:20.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:21 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:21 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2187 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:22.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:22 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:22 compute-1 ceph-mon[81715]: pgmap v1359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:22.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:23 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:24 compute-1 podman[228550]: 2026-01-22 14:13:24.057436003 +0000 UTC m=+0.048307729 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:13:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:24.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:24 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:24 compute-1 ceph-mon[81715]: pgmap v1360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:24.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:25 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:25 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2192 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:26.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:13:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:26.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:13:26 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:26 compute-1 ceph-mon[81715]: pgmap v1361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:26 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:27 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:28.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:28.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:28 compute-1 ceph-mon[81715]: pgmap v1362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:28 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:29 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:30.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:30.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:30 compute-1 ceph-mon[81715]: pgmap v1363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:30 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2197 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:31 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:31 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:32.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:32.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:32 compute-1 ceph-mon[81715]: pgmap v1364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:32 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:33 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:34.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:34.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:34 compute-1 ceph-mon[81715]: pgmap v1365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:34 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:35 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:35 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2202 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:36.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:36.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:36 compute-1 ceph-mon[81715]: pgmap v1366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:36 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:37 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:38.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:38.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:38 compute-1 ceph-mon[81715]: pgmap v1367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:38 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:39 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:40.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:40.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:40 compute-1 ceph-mon[81715]: pgmap v1368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/914354036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:13:40 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:40 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2207 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1473123618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:13:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3392952699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:13:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:41 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2960319933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:13:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:42.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:42.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:42 compute-1 ceph-mon[81715]: pgmap v1369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:42 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:43 compute-1 podman[228569]: 2026-01-22 14:13:43.101879686 +0000 UTC m=+0.094689295 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:13:44 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:44.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:44.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:45 compute-1 ceph-mon[81715]: pgmap v1370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:45 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:46 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:46 compute-1 ceph-mon[81715]: pgmap v1371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:46 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2212 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:46.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:46.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:47 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:13:47.448 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:13:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:13:47.449 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:13:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:13:47.449 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:13:48 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:48 compute-1 ceph-mon[81715]: pgmap v1372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:48.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:48.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:49 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:50.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:50 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:50 compute-1 ceph-mon[81715]: pgmap v1373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:50.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:51 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:51 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2217 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:52.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:52 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:52 compute-1 ceph-mon[81715]: pgmap v1374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:52.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:53 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:54.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:54 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:54 compute-1 ceph-mon[81715]: pgmap v1375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:54.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:55 compute-1 podman[228597]: 2026-01-22 14:13:55.064475642 +0000 UTC m=+0.054821391 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 14:13:55 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:55 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:55 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2222 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:56.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:56 compute-1 ceph-mon[81715]: pgmap v1376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:56 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:56.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:57 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:58.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:58 compute-1 ceph-mon[81715]: pgmap v1377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:58 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:13:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:58.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:59 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:00.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:00 compute-1 ceph-mon[81715]: pgmap v1378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:00 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:00 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2227 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:00.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:01 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:02.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:02.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:02 compute-1 ceph-mon[81715]: pgmap v1379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:02 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:03 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:04.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:04.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:05 compute-1 ceph-mon[81715]: pgmap v1380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:05 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:06 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:06 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2232 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:06.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:06.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:07 compute-1 ceph-mon[81715]: pgmap v1381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:07 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:08 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:08 compute-1 ceph-mon[81715]: pgmap v1382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:08.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:09 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:10 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:10 compute-1 ceph-mon[81715]: pgmap v1383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:14:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:10.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:14:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:10.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:11 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:11 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2237 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:12.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:12.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:12 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:12 compute-1 ceph-mon[81715]: pgmap v1384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:12 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:13 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:14 compute-1 podman[228616]: 2026-01-22 14:14:14.104573889 +0000 UTC m=+0.086710332 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 22 14:14:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:14.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:14.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:14 compute-1 ceph-mon[81715]: pgmap v1385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:14 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:15 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:15 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2242 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:14:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:16.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:14:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:16.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:17 compute-1 ceph-mon[81715]: pgmap v1386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:17 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:17 compute-1 sudo[228644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:14:17 compute-1 sudo[228644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:17 compute-1 sudo[228644]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:17 compute-1 sudo[228669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:14:17 compute-1 sudo[228669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:17 compute-1 sudo[228669]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:18 compute-1 sudo[228694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:14:18 compute-1 sudo[228694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:18 compute-1 sudo[228694]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:18 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:18 compute-1 sudo[228719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 14:14:18 compute-1 sudo[228719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:18.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:18 compute-1 podman[228814]: 2026-01-22 14:14:18.61973168 +0000 UTC m=+0.067470249 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 14:14:18 compute-1 podman[228814]: 2026-01-22 14:14:18.754120092 +0000 UTC m=+0.201858661 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 14:14:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:18.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:19 compute-1 ceph-mon[81715]: pgmap v1387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3226480098' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:14:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3226480098' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:14:19 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:19 compute-1 sudo[228719]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:19 compute-1 sudo[228938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:14:19 compute-1 sudo[228938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:19 compute-1 sudo[228938]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:19 compute-1 sudo[228963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:14:19 compute-1 sudo[228963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:19 compute-1 sudo[228963]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:19 compute-1 sudo[228988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:14:19 compute-1 sudo[228988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:19 compute-1 sudo[228988]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:19 compute-1 sudo[229013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:14:19 compute-1 sudo[229013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:19 compute-1 sudo[229013]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:14:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:14:20 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:20 compute-1 ceph-mon[81715]: pgmap v1388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:14:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:14:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:14:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:14:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:14:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:14:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:20.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:20.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:21 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:21 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:22 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:22 compute-1 ceph-mon[81715]: pgmap v1389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:22.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:22.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:23 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:24 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:24 compute-1 ceph-mon[81715]: pgmap v1390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:24.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:24.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:25 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:26 compute-1 podman[229070]: 2026-01-22 14:14:26.070475805 +0000 UTC m=+0.055719156 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 14:14:26 compute-1 sudo[229090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:14:26 compute-1 sudo[229090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:26 compute-1 sudo[229090]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:26 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:26 compute-1 ceph-mon[81715]: pgmap v1391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:26 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2252 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:14:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:14:26 compute-1 sudo[229115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:14:26 compute-1 sudo[229115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:26 compute-1 sudo[229115]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:26.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:26.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:27 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:28.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:29 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:29 compute-1 ceph-mon[81715]: pgmap v1392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:30 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:30 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:30 compute-1 ceph-mon[81715]: pgmap v1393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:30.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:30.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:31 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:31 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2257 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:32 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:32 compute-1 ceph-mon[81715]: pgmap v1394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:32.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:32.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:33 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.288876) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273288957, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1306, "num_deletes": 251, "total_data_size": 2307919, "memory_usage": 2338280, "flush_reason": "Manual Compaction"}
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273300424, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 1505369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38632, "largest_seqno": 39933, "table_properties": {"data_size": 1500082, "index_size": 2555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13785, "raw_average_key_size": 20, "raw_value_size": 1488459, "raw_average_value_size": 2251, "num_data_blocks": 110, "num_entries": 661, "num_filter_entries": 661, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091196, "oldest_key_time": 1769091196, "file_creation_time": 1769091273, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 11593 microseconds, and 5073 cpu microseconds.
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.300480) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 1505369 bytes OK
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.300504) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.302178) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.302192) EVENT_LOG_v1 {"time_micros": 1769091273302188, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.302210) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2301526, prev total WAL file size 2301526, number of live WAL files 2.
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.302952) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(1470KB)], [72(10MB)]
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273302986, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 12307079, "oldest_snapshot_seqno": -1}
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 7973 keys, 10596101 bytes, temperature: kUnknown
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273377944, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 10596101, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10547245, "index_size": 27816, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19973, "raw_key_size": 212770, "raw_average_key_size": 26, "raw_value_size": 10405996, "raw_average_value_size": 1305, "num_data_blocks": 1075, "num_entries": 7973, "num_filter_entries": 7973, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091273, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.378243) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 10596101 bytes
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.380296) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.9 rd, 141.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.3 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(15.2) write-amplify(7.0) OK, records in: 8490, records dropped: 517 output_compression: NoCompression
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.380322) EVENT_LOG_v1 {"time_micros": 1769091273380310, "job": 44, "event": "compaction_finished", "compaction_time_micros": 75070, "compaction_time_cpu_micros": 26789, "output_level": 6, "num_output_files": 1, "total_output_size": 10596101, "num_input_records": 8490, "num_output_records": 7973, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273380856, "job": 44, "event": "table_file_deletion", "file_number": 74}
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273382912, "job": 44, "event": "table_file_deletion", "file_number": 72}
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.302884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.382999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.383004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.383005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.383007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:14:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.383008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:14:34 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:34 compute-1 ceph-mon[81715]: pgmap v1395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:34.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:14:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.5 total, 600.0 interval
                                           Cumulative writes: 8392 writes, 31K keys, 8392 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8392 writes, 2025 syncs, 4.14 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1095 writes, 3258 keys, 1095 commit groups, 1.0 writes per commit group, ingest: 2.59 MB, 0.00 MB/s
                                           Interval WAL: 1095 writes, 476 syncs, 2.30 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 14:14:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:34.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:35 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:36 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:36 compute-1 ceph-mon[81715]: pgmap v1396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:36 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2262 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:36.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:14:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:36.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:14:37 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:38 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:38 compute-1 ceph-mon[81715]: pgmap v1397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:38.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:38.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:39 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:40 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:40 compute-1 ceph-mon[81715]: pgmap v1398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:40.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:40.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:41 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:41 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:42 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:42 compute-1 ceph-mon[81715]: pgmap v1399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:42 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1519008089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:14:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:42.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:14:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:42.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:14:43 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:43 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2997728244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:14:43 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2614008316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:14:44 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:44 compute-1 ceph-mon[81715]: pgmap v1400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:44 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2571695261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:14:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:14:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:44.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:14:44 compute-1 podman[229140]: 2026-01-22 14:14:44.634396014 +0000 UTC m=+0.087051331 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 14:14:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:44.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:45 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:46 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:46 compute-1 ceph-mon[81715]: pgmap v1401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:46 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:46.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:46.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:14:47.449 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:14:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:14:47.450 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:14:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:14:47.450 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:14:47 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:48.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:48 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:48 compute-1 ceph-mon[81715]: pgmap v1402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:48 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:48.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:49 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:50.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:50 compute-1 ceph-mon[81715]: pgmap v1403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:50 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:50 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2277 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:50.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:51 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:14:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:52.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:14:52 compute-1 ceph-mon[81715]: pgmap v1404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:52 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:52.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:53 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:54.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:54.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:55 compute-1 ceph-mon[81715]: pgmap v1405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:55 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:56 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:56 compute-1 ceph-mon[81715]: pgmap v1406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:56 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2282 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:56.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:56.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:57 compute-1 podman[229169]: 2026-01-22 14:14:57.057323241 +0000 UTC m=+0.046893001 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 14:14:57 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:58.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:58 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:58 compute-1 ceph-mon[81715]: pgmap v1407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:58 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:14:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:58.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:59 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:00.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:00 compute-1 ceph-mon[81715]: pgmap v1408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:00 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:00 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2287 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:00.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:01 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:02.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:02 compute-1 ceph-mon[81715]: pgmap v1409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:02 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:02.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:03 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:04.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:04.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:04 compute-1 ceph-mon[81715]: pgmap v1410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:04 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:06 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:06 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2292 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:06.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:06.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:07 compute-1 ceph-mon[81715]: pgmap v1411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:07 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:08 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:08.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:08.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:09 compute-1 ceph-mon[81715]: pgmap v1412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:09 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:10 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:10 compute-1 ceph-mon[81715]: pgmap v1413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:10.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:10.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:11 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:11 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2297 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:12 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:12 compute-1 ceph-mon[81715]: pgmap v1414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:12.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:12.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:13 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:14 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:14 compute-1 ceph-mon[81715]: pgmap v1415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:14.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:14.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:15 compute-1 podman[229189]: 2026-01-22 14:15:15.129835572 +0000 UTC m=+0.114483243 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 14:15:15 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:16 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:16 compute-1 ceph-mon[81715]: pgmap v1416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:16 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:16.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:16.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:17 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:15:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1676732257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:15:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:15:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1676732257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:15:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:18.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:18 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:18 compute-1 ceph-mon[81715]: pgmap v1417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1676732257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:15:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1676732257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:15:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:18.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:19 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:19 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:20.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:20 compute-1 ceph-mon[81715]: pgmap v1418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:20 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:20 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:20.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:22 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:22.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:22.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:23 compute-1 ceph-mon[81715]: pgmap v1419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:23 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:24 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:24 compute-1 ceph-mon[81715]: pgmap v1420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:24.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:24.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:25 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:26 compute-1 sudo[229216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:15:26 compute-1 sudo[229216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:26 compute-1 sudo[229216]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:26 compute-1 sudo[229241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:15:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:26 compute-1 sudo[229241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:26.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:26 compute-1 sudo[229241]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:26 compute-1 sudo[229266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:15:26 compute-1 sudo[229266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:26 compute-1 sudo[229266]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:26 compute-1 sudo[229291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:15:26 compute-1 sudo[229291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:26 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:26 compute-1 ceph-mon[81715]: pgmap v1421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:26 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2312 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:26 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:26.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:27 compute-1 sudo[229291]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:28 compute-1 podman[229347]: 2026-01-22 14:15:28.055287303 +0000 UTC m=+0.049849991 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 14:15:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:15:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:15:28 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:28 compute-1 ceph-mon[81715]: pgmap v1422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:15:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:15:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:15:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:15:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:15:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:15:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:28.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:28.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:29 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:30.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:30.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:31 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:31 compute-1 ceph-mon[81715]: pgmap v1423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:15:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 7321 writes, 40K keys, 7321 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 7321 writes, 7321 syncs, 1.00 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1836 writes, 9586 keys, 1836 commit groups, 1.0 writes per commit group, ingest: 16.50 MB, 0.03 MB/s
                                           Interval WAL: 1836 writes, 1836 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     49.8      0.90              0.14        22    0.041       0      0       0.0       0.0
                                             L6      1/0   10.11 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    130.6    109.8      1.66              0.51        21    0.079    135K    12K       0.0       0.0
                                            Sum      1/0   10.11 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1     84.8     88.7      2.55              0.65        43    0.059    135K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.8    135.1    138.4      0.48              0.20        12    0.040     48K   4092       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    130.6    109.8      1.66              0.51        21    0.079    135K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     49.9      0.89              0.14        21    0.043       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.044, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.22 GB write, 0.09 MB/s write, 0.21 GB read, 0.09 MB/s read, 2.6 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 23.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000151 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1252,22.65 MB,7.45014%) FilterBlock(43,388.92 KB,0.124936%) IndexBlock(43,580.58 KB,0.186504%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 14:15:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:32.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:32.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:32 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:32 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:32 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:32 compute-1 ceph-mon[81715]: pgmap v1424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:32 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:34 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:34.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:34.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:35 compute-1 ceph-mon[81715]: pgmap v1425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:35 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:36 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:36.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:36.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:37 compute-1 ceph-mon[81715]: pgmap v1426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:37 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:37 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:38 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:38 compute-1 ceph-mon[81715]: pgmap v1427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:38.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:38.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:39 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:39 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:40.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:40 compute-1 ceph-mon[81715]: pgmap v1428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:40 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:40.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:41 compute-1 sudo[229368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:15:41 compute-1 sudo[229368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:41 compute-1 sudo[229368]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:41 compute-1 sudo[229393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:15:41 compute-1 sudo[229393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:41 compute-1 sudo[229393]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:42 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:42 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:15:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:15:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:42.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:42.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:43 compute-1 ceph-mon[81715]: pgmap v1429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:43 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:44.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:44 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:44 compute-1 ceph-mon[81715]: pgmap v1430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:44 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3600868494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:15:44 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:44.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:45 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1191107956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:15:45 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3159148396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:15:45 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:46 compute-1 podman[229418]: 2026-01-22 14:15:46.103060363 +0000 UTC m=+0.087070305 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 22 14:15:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:46.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:46 compute-1 ceph-mon[81715]: pgmap v1431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:46 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3975816590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:15:46 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2337 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:46 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:46.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:15:47.450 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:15:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:15:47.452 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:15:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:15:47.452 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:15:47 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:48.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:48 compute-1 ceph-mon[81715]: pgmap v1432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:48 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:48.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:50 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:50.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:51 compute-1 ceph-mon[81715]: pgmap v1433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:51 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:52.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:52 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:52 compute-1 ceph-mon[81715]: pgmap v1434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:52 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2342 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:52 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:53.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:54 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:54.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:55.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:55 compute-1 ceph-mon[81715]: pgmap v1435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:55 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:15:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:56.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:15:56 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:56 compute-1 ceph-mon[81715]: pgmap v1436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:57.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:57 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:57 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:58.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:15:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:59.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:59 compute-1 podman[229444]: 2026-01-22 14:15:59.070643051 +0000 UTC m=+0.056369038 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 14:15:59 compute-1 ceph-mon[81715]: pgmap v1437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:59 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:16:00 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:16:00.411 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:16:00 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:16:00.412 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:16:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:16:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:00.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:16:00 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:01.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:01 compute-1 ceph-mon[81715]: pgmap v1438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:01 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:01 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2347 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:02.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:03.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:03 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:03 compute-1 ceph-mon[81715]: pgmap v1439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:03 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:04 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:04.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:05.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:05 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:16:05.415 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:16:05 compute-1 ceph-mon[81715]: pgmap v1440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:05 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:16:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:06.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:16:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:07.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:07 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:07 compute-1 ceph-mon[81715]: pgmap v1441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:07 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:07 compute-1 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 2352 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:08 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:08 compute-1 ceph-mon[81715]: pgmap v1442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:16:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:08.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:16:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:09.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:10 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:10.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:16:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:11.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:16:11 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:11 compute-1 ceph-mon[81715]: pgmap v1443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.793480) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371793928, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1482, "num_deletes": 251, "total_data_size": 2779374, "memory_usage": 2815816, "flush_reason": "Manual Compaction"}
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371803517, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 1150158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39938, "largest_seqno": 41415, "table_properties": {"data_size": 1145332, "index_size": 2030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14943, "raw_average_key_size": 21, "raw_value_size": 1133865, "raw_average_value_size": 1652, "num_data_blocks": 88, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091274, "oldest_key_time": 1769091274, "file_creation_time": 1769091371, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 10092 microseconds, and 5026 cpu microseconds.
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.803578) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 1150158 bytes OK
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.803604) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.805020) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.805034) EVENT_LOG_v1 {"time_micros": 1769091371805030, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.805054) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2772229, prev total WAL file size 2772229, number of live WAL files 2.
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.805940) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(1123KB)], [75(10MB)]
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371805987, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 11746259, "oldest_snapshot_seqno": -1}
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 8187 keys, 8509732 bytes, temperature: kUnknown
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371858165, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 8509732, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8463294, "index_size": 24886, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 218105, "raw_average_key_size": 26, "raw_value_size": 8322082, "raw_average_value_size": 1016, "num_data_blocks": 953, "num_entries": 8187, "num_filter_entries": 8187, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091371, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:16:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.858869) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8509732 bytes
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.860344) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 224.7 rd, 162.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(17.6) write-amplify(7.4) OK, records in: 8659, records dropped: 472 output_compression: NoCompression
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.860360) EVENT_LOG_v1 {"time_micros": 1769091371860352, "job": 46, "event": "compaction_finished", "compaction_time_micros": 52264, "compaction_time_cpu_micros": 24719, "output_level": 6, "num_output_files": 1, "total_output_size": 8509732, "num_input_records": 8659, "num_output_records": 8187, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371860782, "job": 46, "event": "table_file_deletion", "file_number": 77}
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371862942, "job": 46, "event": "table_file_deletion", "file_number": 75}
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.805861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.862996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.863003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.863005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.863007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:16:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.863009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:16:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:12.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:16:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:13.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:16:13 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:13 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:13 compute-1 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 2357 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:13 compute-1 ceph-mon[81715]: pgmap v1444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:14 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:14 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:14 compute-1 ceph-mon[81715]: pgmap v1445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:16:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:14.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:16:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:15.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:16 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:16 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:16.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:17.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:17 compute-1 podman[229463]: 2026-01-22 14:16:17.095662035 +0000 UTC m=+0.086527631 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 22 14:16:17 compute-1 ceph-mon[81715]: pgmap v1446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:17 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:17 compute-1 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 2362 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:18.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:18 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:18 compute-1 ceph-mon[81715]: pgmap v1447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:18 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3577899950' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:16:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3577899950' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:16:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:19.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:20 compute-1 ceph-mon[81715]: pgmap v1448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:20.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:16:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:21.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:16:21 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:21 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:21 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:21 compute-1 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 2367 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:22 compute-1 ceph-mon[81715]: pgmap v1449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:22 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:22.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:16:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:23.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:16:24 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:24.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:25.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:25 compute-1 ceph-mon[81715]: pgmap v1450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:25 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:26.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:27.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:27 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:28.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:28 compute-1 ceph-mon[81715]: pgmap v1451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:28 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:28 compute-1 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 2372 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:28 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:16:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:29.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:16:30 compute-1 ceph-mon[81715]: pgmap v1452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:30 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:30 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:30 compute-1 podman[229490]: 2026-01-22 14:16:30.061569975 +0000 UTC m=+0.051402802 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 22 14:16:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:16:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:30.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:16:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:31.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:31 compute-1 ceph-mon[81715]: pgmap v1453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:31 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:32 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:32 compute-1 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 2377 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:16:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:32.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:16:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:33.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:33 compute-1 ceph-mon[81715]: pgmap v1454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:33 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:34 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:34 compute-1 ceph-mon[81715]: pgmap v1455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:34.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:16:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:35.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:16:36 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:36 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:36.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:37.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:37 compute-1 ceph-mon[81715]: pgmap v1456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:37 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:37 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2387 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:38 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:38 compute-1 ceph-mon[81715]: pgmap v1457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:38.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:39.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:39 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:40.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:41.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:41 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:41 compute-1 ceph-mon[81715]: pgmap v1458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:41 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:42 compute-1 sudo[229509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:16:42 compute-1 sudo[229509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:42 compute-1 sudo[229509]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:42 compute-1 sudo[229534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:16:42 compute-1 sudo[229534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:42 compute-1 sudo[229534]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:42 compute-1 sudo[229559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:16:42 compute-1 sudo[229559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:42 compute-1 sudo[229559]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:42 compute-1 sudo[229584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 14:16:42 compute-1 sudo[229584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:42 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:42 compute-1 sudo[229584]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:42.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:43.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:43 compute-1 sudo[229629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:16:43 compute-1 sudo[229629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:43 compute-1 sudo[229629]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:43 compute-1 sudo[229654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:16:43 compute-1 sudo[229654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:43 compute-1 sudo[229654]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:43 compute-1 sudo[229679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:16:43 compute-1 sudo[229679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:43 compute-1 sudo[229679]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:43 compute-1 sudo[229704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:16:43 compute-1 sudo[229704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:43 compute-1 ceph-mon[81715]: pgmap v1459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:43 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2392 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:43 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:16:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:16:43 compute-1 sudo[229704]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:44.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:44 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:44 compute-1 ceph-mon[81715]: pgmap v1460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:16:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:16:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:16:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:16:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:16:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:16:44 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:45.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:46 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3043854192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:16:46 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:46 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1336531881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:16:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:16:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:46.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:16:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:47 compute-1 ceph-mon[81715]: pgmap v1461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:47 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/353905065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:16:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:47.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:16:47.452 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:16:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:16:47.452 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:16:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:16:47.452 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:16:48 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/336408558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:16:48 compute-1 podman[229759]: 2026-01-22 14:16:48.102788647 +0000 UTC m=+0.087983260 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 14:16:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:16:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:48.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:16:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:49.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:49 compute-1 ceph-mon[81715]: pgmap v1462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:49 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:50 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:16:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:50.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:16:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:51.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:51 compute-1 ceph-mon[81715]: pgmap v1463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:51 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:16:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:16:51 compute-1 sudo[229786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:16:51 compute-1 sudo[229786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:51 compute-1 sudo[229786]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:51 compute-1 sudo[229811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:16:51 compute-1 sudo[229811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:51 compute-1 sudo[229811]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:52 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:52 compute-1 ceph-mon[81715]: pgmap v1464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:52 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2397 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:52.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:53.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:53 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:54 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:54 compute-1 ceph-mon[81715]: pgmap v1465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:54.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:55.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:55 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:56 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:56 compute-1 ceph-mon[81715]: pgmap v1466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:56.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:57.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:57 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:57 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2407 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:58.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:16:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:59.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:59 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:59 compute-1 ceph-mon[81715]: pgmap v1467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:59 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:00.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:01 compute-1 podman[229836]: 2026-01-22 14:17:01.061959076 +0000 UTC m=+0.054082395 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:17:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:01.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:02.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:02 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:02 compute-1 ceph-mon[81715]: pgmap v1468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Jan 22 14:17:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:03.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:04 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:04 compute-1 ceph-mon[81715]: pgmap v1469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Jan 22 14:17:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:04.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:05.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:05 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:05 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:05 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:05 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:05 compute-1 ceph-mon[81715]: pgmap v1470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Jan 22 14:17:05 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:06.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:06 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:17:06 compute-1 ceph-mon[81715]: pgmap v1471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 22 14:17:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:07.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:08 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:08 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:08.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:09.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:09 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:17:09.328 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:17:09 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:17:09.329 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:17:09 compute-1 ceph-mon[81715]: pgmap v1472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 22 14:17:09 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:10 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:17:10.331 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:17:10 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:10 compute-1 ceph-mon[81715]: pgmap v1473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Jan 22 14:17:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:10.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:11.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:11 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:11 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:12.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:12 compute-1 ceph-mon[81715]: pgmap v1474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 67 op/s
Jan 22 14:17:12 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:12 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:13.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:13 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:14.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:14 compute-1 ceph-mon[81715]: pgmap v1475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 0 B/s wr, 88 op/s
Jan 22 14:17:14 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:15.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:16 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:16.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:17.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:17 compute-1 ceph-mon[81715]: pgmap v1476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 0 B/s wr, 114 op/s
Jan 22 14:17:17 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:17 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:18.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:18 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:18 compute-1 ceph-mon[81715]: pgmap v1477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 105 op/s
Jan 22 14:17:18 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2171679207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:17:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2171679207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:17:19 compute-1 podman[229855]: 2026-01-22 14:17:19.101009868 +0000 UTC m=+0.088026162 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 14:17:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:19.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:19 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:20.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:21 compute-1 ceph-mon[81715]: pgmap v1478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 105 op/s
Jan 22 14:17:21 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:21.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:22 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:22.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:23.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:23 compute-1 ceph-mon[81715]: pgmap v1479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 0 B/s wr, 93 op/s
Jan 22 14:17:23 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:23 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:24.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:24 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:24 compute-1 ceph-mon[81715]: pgmap v1480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Jan 22 14:17:24 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:25.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:26.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:26 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:27.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:28 compute-1 ceph-mon[81715]: pgmap v1481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Jan 22 14:17:28 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:28 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:28.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:29 compute-1 ceph-mon[81715]: pgmap v1482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:29 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:29.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:30 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:30.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:31.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:31 compute-1 ceph-mon[81715]: pgmap v1483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:31 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:32 compute-1 podman[229881]: 2026-01-22 14:17:32.08591029 +0000 UTC m=+0.069612938 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:17:32 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:32 compute-1 ceph-mon[81715]: pgmap v1484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:32 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:32.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:33.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:33 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.284653) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454284734, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1242, "num_deletes": 251, "total_data_size": 2383867, "memory_usage": 2420128, "flush_reason": "Manual Compaction"}
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454296943, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 1568060, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41420, "largest_seqno": 42657, "table_properties": {"data_size": 1562724, "index_size": 2604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13558, "raw_average_key_size": 21, "raw_value_size": 1551283, "raw_average_value_size": 2405, "num_data_blocks": 111, "num_entries": 645, "num_filter_entries": 645, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091372, "oldest_key_time": 1769091372, "file_creation_time": 1769091454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 12298 microseconds, and 4985 cpu microseconds.
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.296990) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 1568060 bytes OK
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.297016) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.298634) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.298649) EVENT_LOG_v1 {"time_micros": 1769091454298644, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.298689) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2377682, prev total WAL file size 2377682, number of live WAL files 2.
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.299718) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(1531KB)], [78(8310KB)]
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454299789, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 10077792, "oldest_snapshot_seqno": -1}
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 8315 keys, 8450538 bytes, temperature: kUnknown
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454354051, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 8450538, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8403399, "index_size": 25267, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20805, "raw_key_size": 222093, "raw_average_key_size": 26, "raw_value_size": 8259917, "raw_average_value_size": 993, "num_data_blocks": 963, "num_entries": 8315, "num_filter_entries": 8315, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.354372) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8450538 bytes
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.355641) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.3 rd, 155.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(11.8) write-amplify(5.4) OK, records in: 8832, records dropped: 517 output_compression: NoCompression
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.355952) EVENT_LOG_v1 {"time_micros": 1769091454355651, "job": 48, "event": "compaction_finished", "compaction_time_micros": 54373, "compaction_time_cpu_micros": 23417, "output_level": 6, "num_output_files": 1, "total_output_size": 8450538, "num_input_records": 8832, "num_output_records": 8315, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454356426, "job": 48, "event": "table_file_deletion", "file_number": 80}
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454358459, "job": 48, "event": "table_file_deletion", "file_number": 78}
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.299564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.358504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.358508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.358510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.358511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:17:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.358515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:17:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:34.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:35.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:35 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:35 compute-1 ceph-mon[81715]: pgmap v1485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:35 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:36 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:36 compute-1 ceph-mon[81715]: pgmap v1486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:36.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:37.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:37 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:37 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:38.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:39.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:39 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:39 compute-1 ceph-mon[81715]: pgmap v1487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:39 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:40 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:40 compute-1 ceph-mon[81715]: pgmap v1488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:40.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:41.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:42 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:42 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:42 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2447 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:42.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:43.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:43 compute-1 ceph-mon[81715]: pgmap v1489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:43 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:44 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:44 compute-1 ceph-mon[81715]: pgmap v1490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:44.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:45.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:45 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:46.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:47 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:47 compute-1 ceph-mon[81715]: pgmap v1491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:47 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3921556699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:17:47 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2457 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:47.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:17:47.453 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:17:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:17:47.454 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:17:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:17:47.454 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:17:48 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/490131714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:17:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3359774298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:17:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:48.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:49.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:49 compute-1 ceph-mon[81715]: pgmap v1492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:49 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1398042159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:17:50 compute-1 podman[229901]: 2026-01-22 14:17:50.151817596 +0000 UTC m=+0.148639605 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 14:17:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:17:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:50.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:17:50 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:50 compute-1 ceph-mon[81715]: pgmap v1493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:51.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:51 compute-1 sudo[229927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:17:51 compute-1 sudo[229927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:51 compute-1 sudo[229927]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:51 compute-1 sudo[229952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:17:51 compute-1 sudo[229952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:51 compute-1 sudo[229952]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:51 compute-1 sudo[229977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:17:51 compute-1 sudo[229977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:51 compute-1 sudo[229977]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:51 compute-1 sudo[230002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:17:51 compute-1 sudo[230002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:52 compute-1 sudo[230002]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:52 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:52 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:52.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:53.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:53 compute-1 ceph-mon[81715]: pgmap v1494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:53 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:53 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 14:17:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 14:17:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:17:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:17:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:54.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:55 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:55 compute-1 ceph-mon[81715]: pgmap v1495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:55 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:55.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:56.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:57 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:57.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:17:58 compute-1 ceph-mon[81715]: pgmap v1496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:58 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:58 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:58.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:17:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:17:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:59.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:00 compute-1 ceph-mon[81715]: pgmap v1497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:18:00 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:18:00 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:18:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:18:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:00.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:01.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:01 compute-1 ceph-mon[81715]: pgmap v1498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:18:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:18:01 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:18:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:18:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:18:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:18:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:18:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:02 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:18:02 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3882272731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:18:02 compute-1 ceph-mon[81715]: pgmap v1499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:02 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:02.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:03 compute-1 podman[230056]: 2026-01-22 14:18:03.084151457 +0000 UTC m=+0.066538746 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 14:18:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:03.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:03 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:18:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:04.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:05.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:05 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:05 compute-1 ceph-mon[81715]: pgmap v1500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 272 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 579 KiB/s wr, 11 op/s
Jan 22 14:18:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:06.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:07.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:07 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:07 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:07 compute-1 ceph-mon[81715]: pgmap v1501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Jan 22 14:18:08 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:08 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:08 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:08 compute-1 ceph-mon[81715]: pgmap v1502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Jan 22 14:18:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:08.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:09.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:09 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:10 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:10 compute-1 ceph-mon[81715]: pgmap v1503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Jan 22 14:18:10 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:10.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:11.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:11 compute-1 sudo[230076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:18:11 compute-1 sudo[230076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:18:11 compute-1 sudo[230076]: pam_unix(sudo:session): session closed for user root
Jan 22 14:18:11 compute-1 sudo[230101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:18:11 compute-1 sudo[230101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:18:11 compute-1 sudo[230101]: pam_unix(sudo:session): session closed for user root
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.863840) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491863876, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 726, "num_deletes": 255, "total_data_size": 1207956, "memory_usage": 1229768, "flush_reason": "Manual Compaction"}
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491870792, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 784551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42662, "largest_seqno": 43383, "table_properties": {"data_size": 780920, "index_size": 1411, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9036, "raw_average_key_size": 19, "raw_value_size": 773296, "raw_average_value_size": 1703, "num_data_blocks": 61, "num_entries": 454, "num_filter_entries": 454, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091454, "oldest_key_time": 1769091454, "file_creation_time": 1769091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 7037 microseconds, and 3469 cpu microseconds.
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.870867) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 784551 bytes OK
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.870899) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.872290) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.872312) EVENT_LOG_v1 {"time_micros": 1769091491872304, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.872336) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1203891, prev total WAL file size 1203891, number of live WAL files 2.
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.873289) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353038' seq:72057594037927935, type:22 .. '6C6F676D0031373539' seq:0, type:0; will stop at (end)
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(766KB)], [81(8252KB)]
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491873367, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 9235089, "oldest_snapshot_seqno": -1}
Jan 22 14:18:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 8244 keys, 9067902 bytes, temperature: kUnknown
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491923977, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 9067902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9020306, "index_size": 25852, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20677, "raw_key_size": 221876, "raw_average_key_size": 26, "raw_value_size": 8877151, "raw_average_value_size": 1076, "num_data_blocks": 985, "num_entries": 8244, "num_filter_entries": 8244, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.924259) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9067902 bytes
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.925736) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.2 rd, 178.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.1 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(23.3) write-amplify(11.6) OK, records in: 8769, records dropped: 525 output_compression: NoCompression
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.925756) EVENT_LOG_v1 {"time_micros": 1769091491925747, "job": 50, "event": "compaction_finished", "compaction_time_micros": 50688, "compaction_time_cpu_micros": 27733, "output_level": 6, "num_output_files": 1, "total_output_size": 9067902, "num_input_records": 8769, "num_output_records": 8244, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491926124, "job": 50, "event": "table_file_deletion", "file_number": 83}
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491927704, "job": 50, "event": "table_file_deletion", "file_number": 81}
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.873104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.928108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.928118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.928122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.928127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:18:11 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.928130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:18:12 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:18:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:18:12 compute-1 ceph-mon[81715]: pgmap v1504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Jan 22 14:18:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:12.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:13.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:13 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:13 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:14 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:14 compute-1 ceph-mon[81715]: pgmap v1505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Jan 22 14:18:14 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:14.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:15.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:15 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:16.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:16 compute-1 ceph-mon[81715]: pgmap v1506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 1.2 MiB/s wr, 4 op/s
Jan 22 14:18:16 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:17.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:18 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:18.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:19 compute-1 ceph-mon[81715]: pgmap v1507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/605101687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:18:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/605101687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:18:19 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:19.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:20 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:20.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:21 compute-1 podman[230126]: 2026-01-22 14:18:21.101714916 +0000 UTC m=+0.095883895 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 14:18:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:21.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:21 compute-1 ceph-mon[81715]: pgmap v1508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:21 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:22 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:22 compute-1 ceph-mon[81715]: pgmap v1509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:22 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:22.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:23.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:23 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:23 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:24.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:24 compute-1 ceph-mon[81715]: pgmap v1510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:24 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:25.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:25 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:18:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:26.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:18:27 compute-1 ceph-mon[81715]: pgmap v1511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:27 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:27 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2497 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:27.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:28 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:28.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:29.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:30 compute-1 ceph-mon[81715]: pgmap v1512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:30 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:30.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:31 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:31 compute-1 ceph-mon[81715]: pgmap v1513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:31 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:31.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:32 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:32 compute-1 ceph-mon[81715]: pgmap v1514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:32.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:33 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:33 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2502 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:33.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:34 compute-1 podman[230152]: 2026-01-22 14:18:34.056865525 +0000 UTC m=+0.047068605 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 14:18:34 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:34 compute-1 ceph-mon[81715]: pgmap v1515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:34 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:34.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:35.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:35 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:36 compute-1 ceph-mon[81715]: pgmap v1516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:36 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:36.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:37.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:37 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:38.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:39 compute-1 ceph-mon[81715]: pgmap v1517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:39 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:39.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:40 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:40.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:41 compute-1 ceph-mon[81715]: pgmap v1518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:41 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:41.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:42 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:42 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:42.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:43 compute-1 ceph-mon[81715]: pgmap v1519: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:43 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:43.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:44 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:44.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:45 compute-1 ceph-mon[81715]: pgmap v1520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:45 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:45.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:46 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:46 compute-1 ceph-mon[81715]: pgmap v1521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:46.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:47 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:47 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2517 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:18:47.455 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:18:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:18:47.455 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:18:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:18:47.455 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:18:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:47.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:48 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:48 compute-1 ceph-mon[81715]: pgmap v1522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:48.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:49 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1217922939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:18:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2271535365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:18:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:49.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:50 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:50 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/545743632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:18:50 compute-1 ceph-mon[81715]: pgmap v1523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:50 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3014372553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:18:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:50.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:51 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:51.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:52 compute-1 podman[230172]: 2026-01-22 14:18:52.081931645 +0000 UTC m=+0.075560022 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 22 14:18:52 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:52 compute-1 ceph-mon[81715]: pgmap v1524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:52.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:53 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:53 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2522 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:53.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:54 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:54 compute-1 ceph-mon[81715]: pgmap v1525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:54.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:55 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:55.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:56 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:56 compute-1 ceph-mon[81715]: pgmap v1526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:56.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:57 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:57.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:58 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:58 compute-1 ceph-mon[81715]: pgmap v1527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:58 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:58.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:59 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:18:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:00 compute-1 ceph-mon[81715]: pgmap v1528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:00 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:00.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:01 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:01.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:02 compute-1 ceph-mon[81715]: pgmap v1529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:02 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2532 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:02 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:02.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:03 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:03.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:04 compute-1 ceph-mon[81715]: pgmap v1530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:04 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:04.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:05 compute-1 podman[230198]: 2026-01-22 14:19:05.069068427 +0000 UTC m=+0.064321815 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 14:19:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 14:19:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:05.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 14:19:05 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:06 compute-1 ceph-mon[81715]: pgmap v1531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:06 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:06.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:07.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:07 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:07 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:08 compute-1 ceph-mon[81715]: pgmap v1532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:08 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:09.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:09.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:10 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:11.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:11 compute-1 ceph-mon[81715]: pgmap v1533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:11 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:11.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:11 compute-1 sudo[230217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:19:11 compute-1 sudo[230217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:11 compute-1 sudo[230217]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:12 compute-1 sudo[230242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:19:12 compute-1 sudo[230242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:12 compute-1 sudo[230242]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:12 compute-1 sudo[230267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:19:12 compute-1 sudo[230267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:12 compute-1 sudo[230267]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:12 compute-1 sudo[230292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:19:12 compute-1 sudo[230292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:12 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:12 compute-1 sudo[230292]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:13.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:13 compute-1 ceph-mon[81715]: pgmap v1534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:13 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:13 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:19:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:19:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:19:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:19:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:19:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:19:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:13.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:14 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:14 compute-1 ceph-mon[81715]: pgmap v1535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:15.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:15 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:15.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:16 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:16 compute-1 ceph-mon[81715]: pgmap v1536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:17.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:17 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:17.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:18 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:18 compute-1 ceph-mon[81715]: pgmap v1537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/632071219' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:19:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/632071219' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:19:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:19.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:19 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:19:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:19:19 compute-1 sudo[230347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:19:19 compute-1 sudo[230347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:19 compute-1 sudo[230347]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:19 compute-1 sudo[230372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:19:19 compute-1 sudo[230372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:19 compute-1 sudo[230372]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:19.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:20 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:20 compute-1 ceph-mon[81715]: pgmap v1538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:21.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:21.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:22 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:23.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:23 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:23 compute-1 ceph-mon[81715]: pgmap v1539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:23 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:23 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:23 compute-1 podman[230397]: 2026-01-22 14:19:23.094496562 +0000 UTC m=+0.085179844 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:19:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:23.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:24 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:25.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:25 compute-1 ceph-mon[81715]: pgmap v1540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:25 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:25.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:26 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:27.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:27 compute-1 ceph-mon[81715]: pgmap v1541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:27 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:27 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:27.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:28 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:29.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:29 compute-1 ceph-mon[81715]: pgmap v1542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:29 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:29.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:30 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:31.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:31 compute-1 ceph-mon[81715]: pgmap v1543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:31 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:31.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:33.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:33 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:33.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:34 compute-1 ceph-mon[81715]: pgmap v1544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:34 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:34 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:34 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:35.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:35 compute-1 ceph-mon[81715]: pgmap v1545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:35 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:35.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:36 compute-1 podman[230423]: 2026-01-22 14:19:36.063503989 +0000 UTC m=+0.052628285 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 14:19:36 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:37.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:37 compute-1 ceph-mon[81715]: pgmap v1546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:37 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:37.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:38 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:39.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:39 compute-1 ceph-mon[81715]: pgmap v1547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:39 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:19:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:19:40 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:40 compute-1 ceph-mon[81715]: pgmap v1548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:41.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:41 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:42 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:42 compute-1 ceph-mon[81715]: pgmap v1549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:42 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:43.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:43 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:43.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:44 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:44 compute-1 ceph-mon[81715]: pgmap v1550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:45.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:45 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:19:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:45.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:19:46 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:46 compute-1 ceph-mon[81715]: pgmap v1551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:47.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:19:47.455 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:19:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:19:47.456 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:19:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:19:47.456 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:19:47 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:47 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:47.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:48 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:48 compute-1 ceph-mon[81715]: pgmap v1552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:49.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:49 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/578781724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:19:49 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:49.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:50 compute-1 ceph-mon[81715]: pgmap v1553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:50 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1661653774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:19:50 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:51.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1314543867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:19:51 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2930567293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:19:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:51.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:52 compute-1 ceph-mon[81715]: pgmap v1554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:52 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:52 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:53.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:53 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:53.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:54 compute-1 podman[230444]: 2026-01-22 14:19:54.131500363 +0000 UTC m=+0.103359740 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:19:54 compute-1 ceph-mon[81715]: pgmap v1555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:54 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:55.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:55 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:55.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:56 compute-1 ceph-mon[81715]: pgmap v1556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:56 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:57.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:57.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:58 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:58 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:59.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:59 compute-1 ceph-mon[81715]: pgmap v1557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:59 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:19:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:59.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:00 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:00 compute-1 ceph-mon[81715]: pgmap v1558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:00 compute-1 ceph-mon[81715]: Health detail: HEALTH_WARN 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops
Jan 22 14:20:00 compute-1 ceph-mon[81715]: [WRN] SLOW_OPS: 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops
Jan 22 14:20:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:01.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:01 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:01 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:01.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:02 compute-1 ceph-mon[81715]: pgmap v1559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:02 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:02 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:03.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:03.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:04 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:05.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:05 compute-1 ceph-mon[81715]: pgmap v1560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:05 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:05.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:06 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:07 compute-1 podman[230473]: 2026-01-22 14:20:07.071388956 +0000 UTC m=+0.060813750 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:20:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:20:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:07.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:20:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:07.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:07 compute-1 ceph-mon[81715]: pgmap v1561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:07 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:07 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:09.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:09 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:09 compute-1 ceph-mon[81715]: pgmap v1562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:09 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:09.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:10 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:10 compute-1 ceph-mon[81715]: pgmap v1563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:11.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:11.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:12 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:12 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:13.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:13 compute-1 ceph-mon[81715]: pgmap v1564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:13 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:13.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:14 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:14 compute-1 ceph-mon[81715]: pgmap v1565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:20:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:15.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:20:15 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:15.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:16 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:16 compute-1 ceph-mon[81715]: pgmap v1566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:17.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:17 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:17 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:17.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:18 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:18 compute-1 ceph-mon[81715]: pgmap v1567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3670343237' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:20:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3670343237' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:20:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:19.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:19 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:19 compute-1 sudo[230494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:20:19 compute-1 sudo[230494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:19 compute-1 sudo[230494]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:19 compute-1 sudo[230519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:20:19 compute-1 sudo[230519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:19 compute-1 sudo[230519]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:19 compute-1 sudo[230544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:20:19 compute-1 sudo[230544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:19 compute-1 sudo[230544]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:19 compute-1 sudo[230569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:20:19 compute-1 sudo[230569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:19.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:20 compute-1 sudo[230569]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:20 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:20 compute-1 ceph-mon[81715]: pgmap v1568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:20:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:21.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:21 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:20:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:20:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:20:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:20:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:20:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:20:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:21.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:20:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:22 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:22 compute-1 ceph-mon[81715]: pgmap v1569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:23.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:23 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:23 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:20:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:23.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:20:24 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:24 compute-1 ceph-mon[81715]: pgmap v1570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:25 compute-1 podman[230626]: 2026-01-22 14:20:25.108779708 +0000 UTC m=+0.105048767 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 14:20:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:25.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:25 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:20:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:25.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:20:26 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:26 compute-1 ceph-mon[81715]: pgmap v1571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:27 compute-1 sudo[230653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:20:27 compute-1 sudo[230653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:27 compute-1 sudo[230653]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:27 compute-1 sudo[230678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:20:27 compute-1 sudo[230678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:27 compute-1 sudo[230678]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:20:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:27.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:20:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:27.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:27 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:20:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.914627) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091627914771, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 2000, "num_deletes": 251, "total_data_size": 3801160, "memory_usage": 3860928, "flush_reason": "Manual Compaction"}
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091627929338, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 2486047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43388, "largest_seqno": 45383, "table_properties": {"data_size": 2478446, "index_size": 4159, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19604, "raw_average_key_size": 21, "raw_value_size": 2461643, "raw_average_value_size": 2664, "num_data_blocks": 180, "num_entries": 924, "num_filter_entries": 924, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091492, "oldest_key_time": 1769091492, "file_creation_time": 1769091627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 14718 microseconds, and 6636 cpu microseconds.
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.929410) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 2486047 bytes OK
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.929439) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.930941) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.930960) EVENT_LOG_v1 {"time_micros": 1769091627930954, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.930982) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 3791865, prev total WAL file size 3791865, number of live WAL files 2.
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.932356) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(2427KB)], [84(8855KB)]
Jan 22 14:20:27 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091627932423, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 11553949, "oldest_snapshot_seqno": -1}
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 8653 keys, 9903981 bytes, temperature: kUnknown
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091628000367, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 9903981, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9853360, "index_size": 27853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21701, "raw_key_size": 232018, "raw_average_key_size": 26, "raw_value_size": 9702513, "raw_average_value_size": 1121, "num_data_blocks": 1064, "num_entries": 8653, "num_filter_entries": 8653, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.000885) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9903981 bytes
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.002272) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.4 rd, 145.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.6 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(8.6) write-amplify(4.0) OK, records in: 9168, records dropped: 515 output_compression: NoCompression
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.002295) EVENT_LOG_v1 {"time_micros": 1769091628002284, "job": 52, "event": "compaction_finished", "compaction_time_micros": 68202, "compaction_time_cpu_micros": 33140, "output_level": 6, "num_output_files": 1, "total_output_size": 9903981, "num_input_records": 9168, "num_output_records": 8653, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091628003568, "job": 52, "event": "table_file_deletion", "file_number": 86}
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091628006276, "job": 52, "event": "table_file_deletion", "file_number": 84}
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.932267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.006601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.006618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.006622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.006626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:20:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.006630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:20:28 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:28 compute-1 ceph-mon[81715]: pgmap v1572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:28 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:29.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:29.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:30 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:20:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:31.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:20:31 compute-1 ceph-mon[81715]: pgmap v1573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:31 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:31.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:32 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:32 compute-1 ceph-mon[81715]: pgmap v1574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:32 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2617 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:33.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:33 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:33.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:34 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:34 compute-1 ceph-mon[81715]: pgmap v1575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:35.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:35 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:35.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:36 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:36 compute-1 ceph-mon[81715]: pgmap v1576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:37.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:37 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:37 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:37.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:38 compute-1 podman[230703]: 2026-01-22 14:20:38.071572376 +0000 UTC m=+0.066030831 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 14:20:38 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:38 compute-1 ceph-mon[81715]: pgmap v1577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:39.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:39 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:39.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:40 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:40 compute-1 ceph-mon[81715]: pgmap v1578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:41.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:41 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:41.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:42 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:42 compute-1 ceph-mon[81715]: pgmap v1579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:43.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:43 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:43 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:43.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:44 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:44 compute-1 ceph-mon[81715]: pgmap v1580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:45.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:45 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:45.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:47.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:20:47.456 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:20:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:20:47.456 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:20:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:20:47.457 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:20:47 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:47 compute-1 ceph-mon[81715]: pgmap v1581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:20:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:47.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:20:48 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:48 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:48 compute-1 ceph-mon[81715]: pgmap v1582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:49.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:49.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:50 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:51.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:51 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:51 compute-1 ceph-mon[81715]: pgmap v1583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3542653799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:20:51 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/76593589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:20:51 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:51.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:52 compute-1 ceph-mon[81715]: pgmap v1584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:52 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2637 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:52 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:52 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1954756605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:20:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:53.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:53 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:53.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:54 compute-1 ceph-mon[81715]: pgmap v1585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:54 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3394538097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:20:54 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:55.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:55 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:55.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:56 compute-1 podman[230722]: 2026-01-22 14:20:56.115237836 +0000 UTC m=+0.104202592 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 14:20:56 compute-1 ceph-mon[81715]: pgmap v1586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:56 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:57.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:57.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:57 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2647 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:57 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:58 compute-1 ceph-mon[81715]: pgmap v1587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:58 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:59.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:20:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:59.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:59 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:21:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:01.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:21:01 compute-1 ceph-mon[81715]: pgmap v1588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:01 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:01.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:02 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:21:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:03.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:21:03 compute-1 ceph-mon[81715]: pgmap v1589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:03 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2652 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:03 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:03.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:04 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:04 compute-1 ceph-mon[81715]: pgmap v1590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:04 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:21:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:05.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:21:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:21:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:05.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:21:06 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:07.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:07 compute-1 ceph-mon[81715]: pgmap v1591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:07 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:07.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:08 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:08 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:09 compute-1 podman[230748]: 2026-01-22 14:21:09.099392437 +0000 UTC m=+0.080588609 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:21:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:09.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:09 compute-1 ceph-mon[81715]: pgmap v1592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:09 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:09.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:10 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:10 compute-1 ceph-mon[81715]: pgmap v1593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:11.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:11 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:11 compute-1 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 14:21:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:11.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:12 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:12 compute-1 ceph-mon[81715]: pgmap v1594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:13.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:13 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:13 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:13.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:14 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:14 compute-1 ceph-mon[81715]: pgmap v1595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:21:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:15.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:21:15 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:15.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:16 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:16 compute-1 ceph-mon[81715]: pgmap v1596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:21:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:17.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:21:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:17 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:17.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:18 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:18 compute-1 ceph-mon[81715]: pgmap v1597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4133897823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:21:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4133897823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:21:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:21:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:19.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:21:19 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:21:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:19.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:21:21 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:21 compute-1 ceph-mon[81715]: pgmap v1598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:21.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:21.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:22 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:22 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:23.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:23 compute-1 ceph-mon[81715]: pgmap v1599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:23 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:23 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:23.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:24 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:24 compute-1 ceph-mon[81715]: pgmap v1600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:25.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:25 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:25.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:26 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:26 compute-1 ceph-mon[81715]: pgmap v1601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:27 compute-1 podman[230768]: 2026-01-22 14:21:27.10384487 +0000 UTC m=+0.097397678 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 22 14:21:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:27.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:27 compute-1 sudo[230794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:21:27 compute-1 sudo[230794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:27 compute-1 sudo[230794]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:27 compute-1 sudo[230819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:21:27 compute-1 sudo[230819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:27 compute-1 sudo[230819]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:27 compute-1 sudo[230844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:21:27 compute-1 sudo[230844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:27 compute-1 sudo[230844]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:27 compute-1 sudo[230869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:21:27 compute-1 sudo[230869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:27 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:27 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:27 compute-1 sudo[230869]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:27.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:29.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:29 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:29 compute-1 ceph-mon[81715]: pgmap v1602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:29 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 14:21:29 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:21:29 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:21:29 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:21:29 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:21:29 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:21:29 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:21:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:21:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:29.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:21:30 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:30 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:30 compute-1 ceph-mon[81715]: pgmap v1603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:31.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:31 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:31.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:32 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:32 compute-1 ceph-mon[81715]: pgmap v1604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:33.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:33 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:33 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:33.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:34 compute-1 sudo[230925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:21:34 compute-1 sudo[230925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:34 compute-1 sudo[230925]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:34 compute-1 sudo[230950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:21:34 compute-1 sudo[230950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:34 compute-1 sudo[230950]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:35 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:35 compute-1 ceph-mon[81715]: pgmap v1605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:21:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:21:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:35.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:35.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:36 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:36 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:37 compute-1 ceph-mon[81715]: pgmap v1606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:37 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:37.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:37.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:38 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:39 compute-1 ceph-mon[81715]: pgmap v1607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:39 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:39.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:39.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:40 compute-1 podman[230975]: 2026-01-22 14:21:40.068009537 +0000 UTC m=+0.054997721 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 14:21:40 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:40 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:21:40.699 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:21:40 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:21:40.700 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:21:40 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:21:40.700 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:21:41 compute-1 ceph-mon[81715]: pgmap v1608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:41 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:41.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:41.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:42 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:21:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:43.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:21:43 compute-1 ceph-mon[81715]: pgmap v1609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:43 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:43 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:43.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:44 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:44 compute-1 ceph-mon[81715]: pgmap v1610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:21:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:45.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:21:45 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:45.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:46 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:46 compute-1 ceph-mon[81715]: pgmap v1611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:21:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:47.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:21:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:21:47.458 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:21:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:21:47.459 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:21:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:21:47.459 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:21:47 compute-1 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:47 compute-1 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:47.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:48 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:48 compute-1 ceph-mon[81715]: pgmap v1612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:49.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:49 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:49.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:50 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:50 compute-1 ceph-mon[81715]: pgmap v1613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:51.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:51 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1686993375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:21:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1271797265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:21:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:51.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:52 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:52 compute-1 ceph-mon[81715]: pgmap v1614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:52 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:52 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/910785072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:21:52 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2844275053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:21:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:53.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:53 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 2702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:53 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2190805920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:21:53 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:53.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:55 compute-1 ceph-mon[81715]: pgmap v1615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 313 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 756 KiB/s wr, 14 op/s
Jan 22 14:21:55 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:55 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/699726964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:21:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:55.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:55.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:56 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:56 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3661583463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:21:57 compute-1 ceph-mon[81715]: pgmap v1616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 345 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 416 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 14:21:57 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:57.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:58.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:58 compute-1 podman[230994]: 2026-01-22 14:21:58.096099592 +0000 UTC m=+0.085163663 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:21:58 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:59 compute-1 ceph-mon[81715]: pgmap v1617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 345 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 416 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 14:21:59 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:21:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:59.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:00.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:00 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:22:00 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3894054400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:01.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:01 compute-1 ceph-mon[81715]: pgmap v1618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 345 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 22 14:22:01 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:22:01 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1264698857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:02.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:02 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:22:02 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1780644383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:22:02 compute-1 ceph-mon[81715]: pgmap v1619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 345 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 14:22:02 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/336972401' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:22:02 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 2707 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:03.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:03 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:22:03 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/785481063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:22:03 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2130573325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:22:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:04.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:04 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:04 compute-1 ceph-mon[81715]: pgmap v1620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 369 MiB data, 405 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 126 op/s
Jan 22 14:22:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:05.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:05 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:06.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:06 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:06 compute-1 ceph-mon[81715]: pgmap v1621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 438 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 150 op/s
Jan 22 14:22:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:07.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:07 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:07 compute-1 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 2717 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:08.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:08 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:08 compute-1 ceph-mon[81715]: pgmap v1622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 438 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 119 op/s
Jan 22 14:22:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:09.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:09 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:10.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:10 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:10 compute-1 ceph-mon[81715]: pgmap v1623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 450 MiB data, 447 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 172 op/s
Jan 22 14:22:10 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:11 compute-1 podman[231021]: 2026-01-22 14:22:11.070799294 +0000 UTC m=+0.057969982 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 14:22:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:11.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.004000108s ======
Jan 22 14:22:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:12.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000108s
Jan 22 14:22:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:13.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:14 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:14 compute-1 ceph-mon[81715]: pgmap v1624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.7 MiB/s wr, 213 op/s
Jan 22 14:22:14 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:14 compute-1 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 2722 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:14.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:15 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:15 compute-1 ceph-mon[81715]: pgmap v1625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 191 op/s
Jan 22 14:22:15 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:15.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:16.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:16 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:17 compute-1 ceph-mon[81715]: pgmap v1626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.9 MiB/s wr, 166 op/s
Jan 22 14:22:17 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:17.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:18.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:18 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:18 compute-1 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 2727 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:22:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2332942019' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:22:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:22:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2332942019' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:22:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:19.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:19 compute-1 ceph-mon[81715]: pgmap v1627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Jan 22 14:22:19 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2332942019' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:22:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2332942019' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:22:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:20.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:20 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:20 compute-1 ceph-mon[81715]: pgmap v1628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Jan 22 14:22:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:21.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:21 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:22.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:22 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:22 compute-1 ceph-mon[81715]: pgmap v1629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 78 op/s
Jan 22 14:22:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:23.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:23 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:23 compute-1 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 2733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:24.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:24 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:24 compute-1 ceph-mon[81715]: pgmap v1630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 11 KiB/s wr, 3 op/s
Jan 22 14:22:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:25.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:25 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:26.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:26 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:26 compute-1 ceph-mon[81715]: pgmap v1631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 11 KiB/s wr, 3 op/s
Jan 22 14:22:26 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/4155437798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:27.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:27 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:28.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:28 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:28 compute-1 ceph-mon[81715]: pgmap v1632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 3 op/s
Jan 22 14:22:29 compute-1 podman[231041]: 2026-01-22 14:22:29.091607582 +0000 UTC m=+0.085805420 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true)
Jan 22 14:22:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:29.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:29 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:30.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:30 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:30 compute-1 ceph-mon[81715]: pgmap v1633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 479 MiB data, 468 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 351 KiB/s wr, 6 op/s
Jan 22 14:22:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:31.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:31 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:32.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:33 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:33 compute-1 ceph-mon[81715]: pgmap v1634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 1.5 MiB/s wr, 17 op/s
Jan 22 14:22:33 compute-1 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 2738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:33.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:34.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:34 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:34 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:34 compute-1 sudo[231067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:22:34 compute-1 sudo[231067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:34 compute-1 sudo[231067]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:34 compute-1 sudo[231092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:22:34 compute-1 sudo[231092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:34 compute-1 sudo[231092]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:34 compute-1 sudo[231117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:22:34 compute-1 sudo[231117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:34 compute-1 sudo[231117]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:34 compute-1 sudo[231142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:22:34 compute-1 sudo[231142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:35 compute-1 ceph-mon[81715]: pgmap v1635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:22:35 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:35.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:35 compute-1 sudo[231142]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:36.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:36 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:22:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:22:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:22:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:22:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:22:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:22:37 compute-1 ceph-mon[81715]: pgmap v1636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:22:37 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:37.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:38.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:38 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:38 compute-1 ceph-mon[81715]: Health check update: 8 slow ops, oldest one blocked for 2748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:39 compute-1 ceph-mon[81715]: pgmap v1637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:22:39 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:22:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:39.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:22:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:40.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:40 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:40 compute-1 ceph-mon[81715]: pgmap v1638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 16 op/s
Jan 22 14:22:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:41.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:41 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:22:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:22:41 compute-1 sudo[231196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:22:41 compute-1 sudo[231196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:41 compute-1 sudo[231196]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:41 compute-1 podman[231220]: 2026-01-22 14:22:41.536561909 +0000 UTC m=+0.048815162 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:22:41 compute-1 sudo[231227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:22:41 compute-1 sudo[231227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:41 compute-1 sudo[231227]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:42.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:42 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:42 compute-1 ceph-mon[81715]: pgmap v1639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Jan 22 14:22:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:43.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:43 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:43 compute-1 ceph-mon[81715]: Health check update: 8 slow ops, oldest one blocked for 2753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:43 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:22:43.891 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:22:43 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:22:43.892 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:22:43 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:22:43.892 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:22:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:44.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:44 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:44 compute-1 ceph-mon[81715]: pgmap v1640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 9.7 KiB/s wr, 1 op/s
Jan 22 14:22:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:45.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:45 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:46.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:46 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:46 compute-1 ceph-mon[81715]: pgmap v1641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 9.7 KiB/s wr, 1 op/s
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.290934) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767291031, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 256, "total_data_size": 3945138, "memory_usage": 4010640, "flush_reason": "Manual Compaction"}
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767307899, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 2581347, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45388, "largest_seqno": 47425, "table_properties": {"data_size": 2573555, "index_size": 4350, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19900, "raw_average_key_size": 21, "raw_value_size": 2556335, "raw_average_value_size": 2699, "num_data_blocks": 188, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091627, "oldest_key_time": 1769091627, "file_creation_time": 1769091767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 16994 microseconds, and 7483 cpu microseconds.
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.307953) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 2581347 bytes OK
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.307975) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.309571) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.309585) EVENT_LOG_v1 {"time_micros": 1769091767309581, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.309604) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 3935628, prev total WAL file size 3935628, number of live WAL files 2.
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.310619) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373538' seq:72057594037927935, type:22 .. '6C6F676D0032303130' seq:0, type:0; will stop at (end)
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(2520KB)], [87(9671KB)]
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767310733, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 12485328, "oldest_snapshot_seqno": -1}
Jan 22 14:22:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:47.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 9075 keys, 12329526 bytes, temperature: kUnknown
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767373884, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 12329526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12274142, "index_size": 31592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 242739, "raw_average_key_size": 26, "raw_value_size": 12113946, "raw_average_value_size": 1334, "num_data_blocks": 1217, "num_entries": 9075, "num_filter_entries": 9075, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.374205) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 12329526 bytes
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.375484) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.7 rd, 195.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 9.4 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(9.6) write-amplify(4.8) OK, records in: 9600, records dropped: 525 output_compression: NoCompression
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.375500) EVENT_LOG_v1 {"time_micros": 1769091767375492, "job": 54, "event": "compaction_finished", "compaction_time_micros": 63140, "compaction_time_cpu_micros": 29994, "output_level": 6, "num_output_files": 1, "total_output_size": 12329526, "num_input_records": 9600, "num_output_records": 9075, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767376103, "job": 54, "event": "table_file_deletion", "file_number": 89}
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767377876, "job": 54, "event": "table_file_deletion", "file_number": 87}
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.310523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.378032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.378038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.378041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.378043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:22:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.378045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:22:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:47 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:22:47.459 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:22:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:22:47.459 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:22:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:22:47.459 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:22:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:48.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:48 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:48 compute-1 ceph-mon[81715]: pgmap v1642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 9.7 KiB/s wr, 1 op/s
Jan 22 14:22:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:49.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:49 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:50.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:50 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:50 compute-1 ceph-mon[81715]: pgmap v1643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 9.7 KiB/s wr, 1 op/s
Jan 22 14:22:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:51.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:51 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:52.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:52 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:52 compute-1 ceph-mon[81715]: pgmap v1644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 22 14:22:52 compute-1 ceph-mon[81715]: Health check update: 8 slow ops, oldest one blocked for 2758 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:53.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:53 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:53 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/4134154296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:54.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:54 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:54 compute-1 ceph-mon[81715]: pgmap v1645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:22:54 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3543596929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:55.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:55 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:56.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:56 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:56 compute-1 ceph-mon[81715]: pgmap v1646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:22:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:57.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:57 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:57 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/542191379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:57 compute-1 ceph-mon[81715]: Health check update: 8 slow ops, oldest one blocked for 2768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:57 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3278762935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:58.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:58 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:22:58 compute-1 ceph-mon[81715]: pgmap v1647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:22:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:22:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:59.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:59 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:00.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:00 compute-1 podman[231266]: 2026-01-22 14:23:00.092471629 +0000 UTC m=+0.083525549 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:23:00 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:00 compute-1 ceph-mon[81715]: pgmap v1648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:01.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:01 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:02.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:02 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:02 compute-1 ceph-mon[81715]: pgmap v1649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:03.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:03 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:03 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:04.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:04 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:04 compute-1 ceph-mon[81715]: pgmap v1650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:05.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:05 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:05 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:06.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:06 compute-1 ceph-mon[81715]: pgmap v1651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:06 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:07.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:07 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:08.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:08 compute-1 ceph-mon[81715]: pgmap v1652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:08 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:09.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:09 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:10.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:10 compute-1 ceph-mon[81715]: pgmap v1653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:10 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:11.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:11 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:12 compute-1 podman[231292]: 2026-01-22 14:23:12.064681555 +0000 UTC m=+0.058295040 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 14:23:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:12.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:12 compute-1 ceph-mon[81715]: pgmap v1654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:12 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:12 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:13.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:13 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:14.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:14 compute-1 ceph-mon[81715]: pgmap v1655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:14 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:15.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:15 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:16.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:16 compute-1 ceph-mon[81715]: pgmap v1656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:16 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:17.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:17 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:17 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.986308) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091797986347, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 661, "num_deletes": 251, "total_data_size": 873640, "memory_usage": 886184, "flush_reason": "Manual Compaction"}
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091797991804, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 573365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47430, "largest_seqno": 48086, "table_properties": {"data_size": 570254, "index_size": 955, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8063, "raw_average_key_size": 19, "raw_value_size": 563778, "raw_average_value_size": 1375, "num_data_blocks": 42, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091767, "oldest_key_time": 1769091767, "file_creation_time": 1769091797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 5529 microseconds, and 2471 cpu microseconds.
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.991839) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 573365 bytes OK
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.991855) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.992984) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.992999) EVENT_LOG_v1 {"time_micros": 1769091797992993, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.993016) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 869950, prev total WAL file size 869950, number of live WAL files 2.
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.993504) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(559KB)], [90(11MB)]
Jan 22 14:23:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091797993643, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 12902891, "oldest_snapshot_seqno": -1}
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 8975 keys, 11173901 bytes, temperature: kUnknown
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798053075, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 11173901, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11120117, "index_size": 30248, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22469, "raw_key_size": 241548, "raw_average_key_size": 26, "raw_value_size": 10962302, "raw_average_value_size": 1221, "num_data_blocks": 1156, "num_entries": 8975, "num_filter_entries": 8975, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.053556) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 11173901 bytes
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.055411) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.6 rd, 187.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.8 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(42.0) write-amplify(19.5) OK, records in: 9485, records dropped: 510 output_compression: NoCompression
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.055442) EVENT_LOG_v1 {"time_micros": 1769091798055428, "job": 56, "event": "compaction_finished", "compaction_time_micros": 59569, "compaction_time_cpu_micros": 27763, "output_level": 6, "num_output_files": 1, "total_output_size": 11173901, "num_input_records": 9485, "num_output_records": 8975, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798055890, "job": 56, "event": "table_file_deletion", "file_number": 92}
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798060504, "job": 56, "event": "table_file_deletion", "file_number": 90}
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.993427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.060580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.060586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.060587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.060589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:23:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.060591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:23:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:18.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:18 compute-1 ceph-mon[81715]: pgmap v1657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:18 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1035172847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:23:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1035172847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:23:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:19.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:20 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:20.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:21 compute-1 ceph-mon[81715]: pgmap v1658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:21 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:21.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:23:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:22.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:23:22 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:22 compute-1 ceph-mon[81715]: pgmap v1659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:23.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:23 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:23 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:24.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:24 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:24 compute-1 ceph-mon[81715]: pgmap v1660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:25.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:25 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:26.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:26 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:26 compute-1 ceph-mon[81715]: pgmap v1661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:27.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:27 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:28.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:28 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:28 compute-1 ceph-mon[81715]: pgmap v1662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:29.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:29 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:30.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:30 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:30 compute-1 ceph-mon[81715]: pgmap v1663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:31 compute-1 podman[231311]: 2026-01-22 14:23:31.0935909 +0000 UTC m=+0.084165708 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 14:23:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:31.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:31 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:32.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:32 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:32 compute-1 ceph-mon[81715]: pgmap v1664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:32 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:33.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:33 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:34.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:34 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:34 compute-1 ceph-mon[81715]: pgmap v1665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:34 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:23:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:35.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:23:35 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:36.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:36 compute-1 ceph-mon[81715]: pgmap v1666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:36 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:37.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:37 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:37 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:38.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:38 compute-1 ceph-mon[81715]: pgmap v1667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:38 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:39.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:39 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:40.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:40 compute-1 ceph-mon[81715]: pgmap v1668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:40 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:41.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:41 compute-1 sudo[231338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:23:41 compute-1 sudo[231338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:41 compute-1 sudo[231338]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:41 compute-1 sudo[231363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:23:41 compute-1 sudo[231363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:41 compute-1 sudo[231363]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:41 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:41 compute-1 sudo[231388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:23:41 compute-1 sudo[231388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:41 compute-1 sudo[231388]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:41 compute-1 sudo[231413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:23:41 compute-1 sudo[231413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:42.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:42 compute-1 sudo[231413]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:42 compute-1 ceph-mon[81715]: pgmap v1669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:42 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:42 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:43 compute-1 podman[231468]: 2026-01-22 14:23:43.07175197 +0000 UTC m=+0.060296710 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 14:23:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:43.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:44.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:44 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:23:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:23:44 compute-1 ceph-mon[81715]: pgmap v1670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:23:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:23:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:23:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:23:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:23:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:23:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:45.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:45 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:46.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:46 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:46 compute-1 ceph-mon[81715]: pgmap v1671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:47.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:23:47.460 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:23:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:23:47.461 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:23:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:23:47.461 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:23:47 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:47 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:48.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:48 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:48 compute-1 ceph-mon[81715]: pgmap v1672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:49.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:49 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:50 compute-1 sudo[231487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:23:50 compute-1 sudo[231487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:50 compute-1 sudo[231487]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:50.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:50 compute-1 sudo[231512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:23:50 compute-1 sudo[231512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:50 compute-1 sudo[231512]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:50 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:50 compute-1 ceph-mon[81715]: pgmap v1673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:23:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:23:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:23:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:51.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:23:51 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:52.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:52 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:52 compute-1 ceph-mon[81715]: pgmap v1674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:53.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:53 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:53 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:53 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/830754578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:23:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:54.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:54 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:54 compute-1 ceph-mon[81715]: pgmap v1675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:54 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2625343517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:23:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:55.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:55 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:56 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:23:56.086 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:23:56 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:23:56.087 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:23:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:56.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:56 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:56 compute-1 ceph-mon[81715]: pgmap v1676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:57.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:57 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:57 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2827271453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:23:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:58.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:58 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:58 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2804744684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:23:58 compute-1 ceph-mon[81715]: pgmap v1677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:23:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:59.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:59 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:00.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:00 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:00 compute-1 ceph-mon[81715]: pgmap v1678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:01.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:01 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:01 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:02 compute-1 podman[231537]: 2026-01-22 14:24:02.124276892 +0000 UTC m=+0.110299438 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:24:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:02.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:02 compute-1 ceph-mon[81715]: pgmap v1679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:02 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:02 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:03.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:03 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:04.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:04 compute-1 ceph-mon[81715]: pgmap v1680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:04 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:05.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:05 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:06 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:24:06.090 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:24:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:06.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:06 compute-1 ceph-mon[81715]: pgmap v1681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:06 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:07.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:07 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:07 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:08.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:08 compute-1 ceph-mon[81715]: pgmap v1682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:08 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:09.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:10.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:10 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:11.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:11 compute-1 ceph-mon[81715]: pgmap v1683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:11 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:12.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:12 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:12 compute-1 ceph-mon[81715]: pgmap v1684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:13.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:13 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:13 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:14 compute-1 podman[231563]: 2026-01-22 14:24:14.058822195 +0000 UTC m=+0.053880365 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 14:24:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:14.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:14 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:14 compute-1 ceph-mon[81715]: pgmap v1685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:15.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:15 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:24:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:16.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:24:16 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:16 compute-1 ceph-mon[81715]: pgmap v1686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:17.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:17 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:18.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:18 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:18 compute-1 ceph-mon[81715]: pgmap v1687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/446750844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:24:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4024552461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:24:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4024552461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:24:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:19.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:19 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:20.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:20 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:20 compute-1 ceph-mon[81715]: pgmap v1688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 524 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 597 B/s rd, 475 KiB/s wr, 0 op/s
Jan 22 14:24:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:21.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:21 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:22.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:23 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:23 compute-1 ceph-mon[81715]: pgmap v1689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 14:24:23 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:23.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:24 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:24 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:24 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2112882646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:24:24 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1383228421' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:24:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:24.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:25 compute-1 ceph-mon[81715]: pgmap v1690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 14:24:25 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:25.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:26 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:26.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:27 compute-1 ceph-mon[81715]: pgmap v1691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 14:24:27 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:27.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:28 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:28 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:24:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:28.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:24:29 compute-1 ceph-mon[81715]: pgmap v1692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 14:24:29 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:29.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:30 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:30.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:31 compute-1 ceph-mon[81715]: pgmap v1693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 14:24:31 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:31.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:32.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:32 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.324806) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872324885, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1245, "num_deletes": 251, "total_data_size": 2151052, "memory_usage": 2174488, "flush_reason": "Manual Compaction"}
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872335911, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 922025, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48092, "largest_seqno": 49331, "table_properties": {"data_size": 917757, "index_size": 1664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13046, "raw_average_key_size": 21, "raw_value_size": 907787, "raw_average_value_size": 1500, "num_data_blocks": 72, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091798, "oldest_key_time": 1769091798, "file_creation_time": 1769091872, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 11133 microseconds, and 3779 cpu microseconds.
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.335952) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 922025 bytes OK
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.335969) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.337223) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.337236) EVENT_LOG_v1 {"time_micros": 1769091872337232, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.337253) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 2144918, prev total WAL file size 2144918, number of live WAL files 2.
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.338135) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323534' seq:72057594037927935, type:22 .. '6D6772737461740031353036' seq:0, type:0; will stop at (end)
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(900KB)], [93(10MB)]
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872338169, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 12095926, "oldest_snapshot_seqno": -1}
Jan 22 14:24:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 9096 keys, 8667682 bytes, temperature: kUnknown
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872397217, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 8667682, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8617179, "index_size": 26647, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 244746, "raw_average_key_size": 26, "raw_value_size": 8461301, "raw_average_value_size": 930, "num_data_blocks": 1006, "num_entries": 9096, "num_filter_entries": 9096, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091872, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.398170) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8667682 bytes
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.399281) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.6 rd, 146.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.7 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(22.5) write-amplify(9.4) OK, records in: 9580, records dropped: 484 output_compression: NoCompression
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.399304) EVENT_LOG_v1 {"time_micros": 1769091872399293, "job": 58, "event": "compaction_finished", "compaction_time_micros": 59111, "compaction_time_cpu_micros": 23373, "output_level": 6, "num_output_files": 1, "total_output_size": 8667682, "num_input_records": 9580, "num_output_records": 9096, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872399682, "job": 58, "event": "table_file_deletion", "file_number": 95}
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872402629, "job": 58, "event": "table_file_deletion", "file_number": 93}
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.338075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.402908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.402935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.402938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.402940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:24:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.402943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:24:33 compute-1 podman[231583]: 2026-01-22 14:24:33.098551521 +0000 UTC m=+0.084582739 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 14:24:33 compute-1 ceph-mon[81715]: pgmap v1694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Jan 22 14:24:33 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:33 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:33.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:34.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:34 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:24:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.5 total, 600.0 interval
                                           Cumulative writes: 9281 writes, 33K keys, 9281 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9281 writes, 2431 syncs, 3.82 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 889 writes, 2183 keys, 889 commit groups, 1.0 writes per commit group, ingest: 2.18 MB, 0.00 MB/s
                                           Interval WAL: 889 writes, 406 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 14:24:35 compute-1 ceph-mon[81715]: pgmap v1695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 22 14:24:35 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:35.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:24:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:36.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:24:36 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:37 compute-1 ceph-mon[81715]: pgmap v1696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 22 14:24:37 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:24:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:37.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:24:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:38.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:38 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:38 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:38 compute-1 ceph-mon[81715]: pgmap v1697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 14:24:39 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:39.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:40.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:40 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:40 compute-1 ceph-mon[81715]: pgmap v1698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 14:24:41 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:41.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:42.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:42 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:42 compute-1 ceph-mon[81715]: pgmap v1699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:43 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:43 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2872 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:43.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:44.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:44 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:44 compute-1 ceph-mon[81715]: pgmap v1700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:45 compute-1 podman[231609]: 2026-01-22 14:24:45.088869637 +0000 UTC m=+0.071812523 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 22 14:24:45 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:45.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:46.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:46 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:46 compute-1 ceph-mon[81715]: pgmap v1701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:47 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:24:47.461 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:24:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:24:47.462 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:24:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:24:47.462 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:24:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:47.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:24:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:48.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:24:48 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:48 compute-1 ceph-mon[81715]: pgmap v1702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:49 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:49.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:50.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:50 compute-1 sudo[231630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:24:50 compute-1 sudo[231630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:50 compute-1 sudo[231630]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:50 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:50 compute-1 ceph-mon[81715]: pgmap v1703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:50 compute-1 sudo[231655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:24:50 compute-1 sudo[231655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:50 compute-1 sudo[231655]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:50 compute-1 sudo[231680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:24:50 compute-1 sudo[231680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:50 compute-1 sudo[231680]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:50 compute-1 sudo[231705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 14:24:50 compute-1 sudo[231705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:51 compute-1 podman[231800]: 2026-01-22 14:24:51.054549283 +0000 UTC m=+0.063774864 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 14:24:51 compute-1 podman[231800]: 2026-01-22 14:24:51.154312594 +0000 UTC m=+0.163538065 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 14:24:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:51.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:51 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:51 compute-1 sudo[231705]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:51 compute-1 sudo[231920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:24:51 compute-1 sudo[231920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:51 compute-1 sudo[231920]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:51 compute-1 sudo[231945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:24:51 compute-1 sudo[231945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:51 compute-1 sudo[231945]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:51 compute-1 sudo[231970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:24:51 compute-1 sudo[231970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:51 compute-1 sudo[231970]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:52 compute-1 sudo[231995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:24:52 compute-1 sudo[231995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:52.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:52 compute-1 sudo[231995]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:52 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:52 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:24:52 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:24:52 compute-1 ceph-mon[81715]: pgmap v1704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:52 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:53.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:53 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:24:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:24:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:24:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:24:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:24:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:24:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:54.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:54 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:54 compute-1 ceph-mon[81715]: pgmap v1705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:55.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:55 compute-1 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:55 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3002855907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:24:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:56.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:56 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:24:56 compute-1 ceph-mon[81715]: pgmap v1706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:56 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/311209811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:24:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:57.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:57 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:24:57 compute-1 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:58 compute-1 sudo[232050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:24:58 compute-1 sudo[232050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:58 compute-1 sudo[232050]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:58 compute-1 sudo[232075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:24:58 compute-1 sudo[232075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:58 compute-1 sudo[232075]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:58.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:58 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:24:58 compute-1 ceph-mon[81715]: pgmap v1707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:24:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:24:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:24:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:59.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:59 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:24:59 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1660272869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:24:59 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1886356108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:25:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:00.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:00 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:00 compute-1 ceph-mon[81715]: pgmap v1708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:01.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:01 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:02.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:02 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:02 compute-1 ceph-mon[81715]: pgmap v1709: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:03.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:03 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:03 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:04 compute-1 podman[232100]: 2026-01-22 14:25:04.106371972 +0000 UTC m=+0.096561786 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:25:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:04.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:04 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:04 compute-1 ceph-mon[81715]: pgmap v1710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:05.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:05 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:06.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:06 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:06 compute-1 ceph-mon[81715]: pgmap v1711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:07.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:07 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:08.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:08 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:08 compute-1 ceph-mon[81715]: pgmap v1712: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:09.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:09 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:10.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:10 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:10 compute-1 ceph-mon[81715]: pgmap v1713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:25:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:11.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:25:12 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:12.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:13 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:13 compute-1 ceph-mon[81715]: pgmap v1714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:13 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:13 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:13.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:14 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:14.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:15 compute-1 ceph-mon[81715]: pgmap v1715: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:15 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:15.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:16 compute-1 podman[232126]: 2026-01-22 14:25:16.05954569 +0000 UTC m=+0.050088331 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 22 14:25:16 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:16.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:17 compute-1 ceph-mon[81715]: pgmap v1716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:17 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:17.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:18 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:18 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:25:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:18.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:25:19 compute-1 ceph-mon[81715]: pgmap v1717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4266357046' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:25:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4266357046' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:25:19 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:19.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:20 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:20.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:21 compute-1 ceph-mon[81715]: pgmap v1718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:21.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:22.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:22 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:22 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:22 compute-1 ceph-mon[81715]: pgmap v1719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:23 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:23 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:23.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:24.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:24 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:24 compute-1 ceph-mon[81715]: pgmap v1720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:25 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:25.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:26.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:26 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:26 compute-1 ceph-mon[81715]: pgmap v1721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:27.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:27 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:28.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:28 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:28 compute-1 ceph-mon[81715]: pgmap v1722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:29.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:29 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:30.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:30 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:30 compute-1 ceph-mon[81715]: pgmap v1723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:31.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:25:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 9128 writes, 50K keys, 9128 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 9127 writes, 9127 syncs, 1.00 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1807 writes, 9447 keys, 1807 commit groups, 1.0 writes per commit group, ingest: 16.36 MB, 0.03 MB/s
                                           Interval WAL: 1806 writes, 1806 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     55.7      0.97              0.17        29    0.034       0      0       0.0       0.0
                                             L6      1/0    8.27 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.6    141.9    119.6      2.06              0.70        28    0.074    199K    15K       0.0       0.0
                                            Sum      1/0    8.27 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.6     96.4     99.1      3.04              0.88        57    0.053    199K    15K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.8    157.4    153.7      0.49              0.22        14    0.035     64K   3548       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    141.9    119.6      2.06              0.70        28    0.074    199K    15K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     55.8      0.97              0.17        28    0.035       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.053, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.29 GB write, 0.10 MB/s write, 0.29 GB read, 0.10 MB/s read, 3.0 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 31.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.00021 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1661,30.10 MB,9.90208%) FilterBlock(57,569.98 KB,0.1831%) IndexBlock(57,805.67 KB,0.258812%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 14:25:31 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:32.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:32 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:32 compute-1 ceph-mon[81715]: pgmap v1724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:32 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:32 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:33.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:33 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:34.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:34 compute-1 ceph-mon[81715]: pgmap v1725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:34 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:35 compute-1 podman[232145]: 2026-01-22 14:25:35.135451029 +0000 UTC m=+0.121127343 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:25:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:35.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:35 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:36.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:36 compute-1 ceph-mon[81715]: pgmap v1726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:36 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:25:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:37.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:25:37 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2927 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:37 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:38.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:38 compute-1 ceph-mon[81715]: pgmap v1727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:38 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:39.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:39 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:40.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:40 compute-1 ceph-mon[81715]: pgmap v1728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:40 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:41.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:41 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:42.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:43 compute-1 ceph-mon[81715]: pgmap v1729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:43 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:43 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:43.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:44 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:44.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:45 compute-1 ceph-mon[81715]: pgmap v1730: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:45 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:45.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:46 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:46.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:47 compute-1 podman[232172]: 2026-01-22 14:25:47.068170521 +0000 UTC m=+0.053776202 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 22 14:25:47 compute-1 ceph-mon[81715]: pgmap v1731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:47 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:25:47.462 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:25:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:25:47.463 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:25:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:25:47.464 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:25:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:47.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:48 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:48 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:48.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:49 compute-1 ceph-mon[81715]: pgmap v1732: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:49 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:49.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:50 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:50.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:51 compute-1 ceph-mon[81715]: pgmap v1733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:51 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:51.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:52.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:52 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:52 compute-1 ceph-mon[81715]: pgmap v1734: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:53 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:53.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:25:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:54.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:25:54 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:54 compute-1 ceph-mon[81715]: pgmap v1735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:55 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:55 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2696841303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:25:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:55.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:56.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:56 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:56 compute-1 ceph-mon[81715]: pgmap v1736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:57 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:57 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:57 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1661484102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:25:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:57.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:25:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:58.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:25:58 compute-1 sudo[232192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:25:58 compute-1 sudo[232192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:25:58 compute-1 sudo[232192]: pam_unix(sudo:session): session closed for user root
Jan 22 14:25:58 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:58 compute-1 ceph-mon[81715]: pgmap v1737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:58 compute-1 sudo[232217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:25:58 compute-1 sudo[232217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:25:58 compute-1 sudo[232217]: pam_unix(sudo:session): session closed for user root
Jan 22 14:25:58 compute-1 sudo[232242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:25:58 compute-1 sudo[232242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:25:58 compute-1 sudo[232242]: pam_unix(sudo:session): session closed for user root
Jan 22 14:25:58 compute-1 sudo[232267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:25:58 compute-1 sudo[232267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:25:59 compute-1 sudo[232267]: pam_unix(sudo:session): session closed for user root
Jan 22 14:25:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:25:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:59.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:59 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:59 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3199945618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:25:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:25:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.831136) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959831186, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1437, "num_deletes": 251, "total_data_size": 2521159, "memory_usage": 2571232, "flush_reason": "Manual Compaction"}
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959846455, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 1644723, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49336, "largest_seqno": 50768, "table_properties": {"data_size": 1639072, "index_size": 2791, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14729, "raw_average_key_size": 20, "raw_value_size": 1626619, "raw_average_value_size": 2303, "num_data_blocks": 120, "num_entries": 706, "num_filter_entries": 706, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091873, "oldest_key_time": 1769091873, "file_creation_time": 1769091959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 15411 microseconds, and 6658 cpu microseconds.
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.846552) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 1644723 bytes OK
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.846575) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.853859) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.853911) EVENT_LOG_v1 {"time_micros": 1769091959853900, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.853937) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2514245, prev total WAL file size 2514245, number of live WAL files 2.
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.855124) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(1606KB)], [96(8464KB)]
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959855209, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 10312405, "oldest_snapshot_seqno": -1}
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 9285 keys, 8614300 bytes, temperature: kUnknown
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959901542, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 8614300, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8562886, "index_size": 27110, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23237, "raw_key_size": 249905, "raw_average_key_size": 26, "raw_value_size": 8403895, "raw_average_value_size": 905, "num_data_blocks": 1020, "num_entries": 9285, "num_filter_entries": 9285, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.901904) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8614300 bytes
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.903553) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.8 rd, 185.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.3 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(11.5) write-amplify(5.2) OK, records in: 9802, records dropped: 517 output_compression: NoCompression
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.903572) EVENT_LOG_v1 {"time_micros": 1769091959903563, "job": 60, "event": "compaction_finished", "compaction_time_micros": 46491, "compaction_time_cpu_micros": 24816, "output_level": 6, "num_output_files": 1, "total_output_size": 8614300, "num_input_records": 9802, "num_output_records": 9285, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959903973, "job": 60, "event": "table_file_deletion", "file_number": 98}
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959905530, "job": 60, "event": "table_file_deletion", "file_number": 96}
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.855023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.905624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.905631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.905633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.905635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:25:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.905636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:26:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:00.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:00 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:26:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:26:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:26:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:26:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:26:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:26:00 compute-1 ceph-mon[81715]: pgmap v1738: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:00 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/226630284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:26:01 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:01.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:02.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:02 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:02 compute-1 ceph-mon[81715]: pgmap v1739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:03 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:03 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2952 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:03.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:04.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:04 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:04 compute-1 ceph-mon[81715]: pgmap v1740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:05 compute-1 sudo[232322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:26:05 compute-1 sudo[232322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:26:05 compute-1 sudo[232322]: pam_unix(sudo:session): session closed for user root
Jan 22 14:26:05 compute-1 sudo[232353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:26:05 compute-1 sudo[232353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:26:05 compute-1 sudo[232353]: pam_unix(sudo:session): session closed for user root
Jan 22 14:26:05 compute-1 podman[232346]: 2026-01-22 14:26:05.587583656 +0000 UTC m=+0.080577361 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 14:26:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:05.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:05 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:26:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:26:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:06.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:06 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:06 compute-1 ceph-mon[81715]: pgmap v1741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:07.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:07 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:26:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:08.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:26:08 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:08 compute-1 ceph-mon[81715]: pgmap v1742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:09.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:09 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:10.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:10 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:10 compute-1 ceph-mon[81715]: pgmap v1743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:11.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:11 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:12.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:12 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:12 compute-1 ceph-mon[81715]: pgmap v1744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:12 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:12 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:13.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:13 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:14.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:14 compute-1 ceph-mon[81715]: pgmap v1745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:14 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:15.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:15 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:16.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:16 compute-1 ceph-mon[81715]: pgmap v1746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:16 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:26:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:17.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:26:17 compute-1 sshd-session[232398]: Invalid user ubnt from 45.148.10.121 port 59990
Jan 22 14:26:18 compute-1 podman[232400]: 2026-01-22 14:26:18.064304794 +0000 UTC m=+0.065937794 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:26:18 compute-1 sshd-session[232398]: Connection closed by invalid user ubnt 45.148.10.121 port 59990 [preauth]
Jan 22 14:26:18 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2967 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:18 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:18.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:19 compute-1 ceph-mon[81715]: pgmap v1747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3655468272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:26:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3655468272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:26:19 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:19.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:20 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:26:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:20.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:26:21 compute-1 ceph-mon[81715]: pgmap v1748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:21 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:21.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:26:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:22.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:26:22 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:23 compute-1 ceph-mon[81715]: pgmap v1749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:23 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:26:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:23.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:26:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:24.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:24 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:24 compute-1 ceph-mon[81715]: pgmap v1750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:25 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:25.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:26.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:26 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:26 compute-1 ceph-mon[81715]: pgmap v1751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:26:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:27.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:26:27 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:27 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2977 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:26:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:28.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:26:29 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:29 compute-1 ceph-mon[81715]: pgmap v1752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:26:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:29.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:26:30 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:30 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:30.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:31 compute-1 ceph-mon[81715]: pgmap v1753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:31 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:31.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:32 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:26:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:32.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:26:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:33 compute-1 ceph-mon[81715]: pgmap v1754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:33 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2982 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:33 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:33.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:34 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:34.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:35 compute-1 ceph-mon[81715]: pgmap v1755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:35 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:35.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:36 compute-1 podman[232420]: 2026-01-22 14:26:36.143608368 +0000 UTC m=+0.129149141 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 14:26:36 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:36.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:37 compute-1 ceph-mon[81715]: pgmap v1756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:37 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:37.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:38 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:38 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:38.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:39 compute-1 ceph-mon[81715]: pgmap v1757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:39 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:39.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:40 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:41 compute-1 ceph-mon[81715]: pgmap v1758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:41 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:41.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:42 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:42.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:43 compute-1 ceph-mon[81715]: pgmap v1759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:43 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:43 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:43.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:44 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:44 compute-1 ceph-mon[81715]: pgmap v1760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:44.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:45 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:45.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:46.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:46 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:46 compute-1 ceph-mon[81715]: pgmap v1761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:26:47.464 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:26:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:26:47.465 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:26:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:26:47.465 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:26:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:47.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:47 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:47 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:48.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:48 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:48 compute-1 ceph-mon[81715]: pgmap v1762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:48 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:49 compute-1 podman[232446]: 2026-01-22 14:26:49.0587265 +0000 UTC m=+0.051587823 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 14:26:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:49.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:49 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:50.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:51 compute-1 ceph-mon[81715]: pgmap v1763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:51 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:51.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:52 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:52.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:53 compute-1 ceph-mon[81715]: pgmap v1764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:53 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:53 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:53.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:54 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:54.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:55 compute-1 ceph-mon[81715]: pgmap v1765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:55 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:55.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:56 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:56 compute-1 ceph-mon[81715]: pgmap v1766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:56.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:57 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:57 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/259508654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:26:57 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:26:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:57.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:26:58 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:58 compute-1 ceph-mon[81715]: pgmap v1767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:58 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/815054871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:26:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:26:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:58.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:26:59 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:26:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:59.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:00.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:00 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:27:00 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1362540640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:27:00 compute-1 ceph-mon[81715]: pgmap v1768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Jan 22 14:27:01 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:01 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3394215233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:27:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:01.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:02.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:02 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:02 compute-1 ceph-mon[81715]: pgmap v1769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 104 op/s
Jan 22 14:27:03 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:03 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 3012 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:03.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:04.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:04 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:04 compute-1 ceph-mon[81715]: pgmap v1770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 104 op/s
Jan 22 14:27:05 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:05.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:05 compute-1 sudo[232463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:27:05 compute-1 sudo[232463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:05 compute-1 sudo[232463]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:05 compute-1 sudo[232488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:27:05 compute-1 sudo[232488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:05 compute-1 sudo[232488]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:05 compute-1 sudo[232513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:27:05 compute-1 sudo[232513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:05 compute-1 sudo[232513]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:05 compute-1 sudo[232538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 14:27:05 compute-1 sudo[232538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:06 compute-1 sudo[232538]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:06 compute-1 sudo[232608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:27:06 compute-1 podman[232584]: 2026-01-22 14:27:06.339420119 +0000 UTC m=+0.161614253 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 14:27:06 compute-1 sudo[232608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:06 compute-1 sudo[232608]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:06 compute-1 sudo[232636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:27:06 compute-1 sudo[232636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:06 compute-1 sudo[232636]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:06.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:06 compute-1 sudo[232662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:27:06 compute-1 sudo[232662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:06 compute-1 sudo[232662]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:06 compute-1 sudo[232687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:27:06 compute-1 sudo[232687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:06 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:06 compute-1 ceph-mon[81715]: pgmap v1771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 0 B/s wr, 162 op/s
Jan 22 14:27:06 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:27:06 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:27:07 compute-1 sudo[232687]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.371354) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027371411, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 1161, "num_deletes": 256, "total_data_size": 1970462, "memory_usage": 1996448, "flush_reason": "Manual Compaction"}
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027380313, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 1294814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50773, "largest_seqno": 51929, "table_properties": {"data_size": 1289998, "index_size": 2212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12371, "raw_average_key_size": 20, "raw_value_size": 1279428, "raw_average_value_size": 2100, "num_data_blocks": 95, "num_entries": 609, "num_filter_entries": 609, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091960, "oldest_key_time": 1769091960, "file_creation_time": 1769092027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 8994 microseconds, and 4142 cpu microseconds.
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.380362) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 1294814 bytes OK
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.380385) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.381578) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.381594) EVENT_LOG_v1 {"time_micros": 1769092027381589, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.381611) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 1964647, prev total WAL file size 1964647, number of live WAL files 2.
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.382277) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303039' seq:72057594037927935, type:22 .. '6C6F676D0032323631' seq:0, type:0; will stop at (end)
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(1264KB)], [99(8412KB)]
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027382349, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 9909114, "oldest_snapshot_seqno": -1}
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 9367 keys, 9739902 bytes, temperature: kUnknown
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027429178, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 9739902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9686831, "index_size": 28575, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 252989, "raw_average_key_size": 27, "raw_value_size": 9525152, "raw_average_value_size": 1016, "num_data_blocks": 1078, "num_entries": 9367, "num_filter_entries": 9367, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.429435) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 9739902 bytes
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.430876) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.2 rd, 207.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.2 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(15.2) write-amplify(7.5) OK, records in: 9894, records dropped: 527 output_compression: NoCompression
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.430891) EVENT_LOG_v1 {"time_micros": 1769092027430883, "job": 62, "event": "compaction_finished", "compaction_time_micros": 46913, "compaction_time_cpu_micros": 23364, "output_level": 6, "num_output_files": 1, "total_output_size": 9739902, "num_input_records": 9894, "num_output_records": 9367, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027431336, "job": 62, "event": "table_file_deletion", "file_number": 101}
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027432997, "job": 62, "event": "table_file_deletion", "file_number": 99}
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.382161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.433107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.433112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.433113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.433116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:27:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.433118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:27:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:07.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:07 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:27:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:27:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:27:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:27:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:27:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:27:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:08.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:08 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:08 compute-1 ceph-mon[81715]: pgmap v1772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 0 B/s wr, 162 op/s
Jan 22 14:27:09 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:09.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:10.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:10 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:10 compute-1 ceph-mon[81715]: pgmap v1773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 0 B/s wr, 162 op/s
Jan 22 14:27:11 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:11.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:12.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:12 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:12 compute-1 ceph-mon[81715]: pgmap v1774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 93 KiB/s rd, 0 B/s wr, 155 op/s
Jan 22 14:27:12 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3017 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:13 compute-1 sudo[232743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:27:13 compute-1 sudo[232743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:13 compute-1 sudo[232743]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:13 compute-1 sudo[232768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:27:13 compute-1 sudo[232768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:13 compute-1 sudo[232768]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:14.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:14 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:27:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:27:14 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:14.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:15 compute-1 ceph-mon[81715]: pgmap v1775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Jan 22 14:27:15 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:16.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:16.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:16 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:17 compute-1 ceph-mon[81715]: pgmap v1776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Jan 22 14:27:17 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:17 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3027 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:18.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:27:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/893010323' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:27:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:27:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/893010323' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:27:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:18.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:18 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:18 compute-1 ceph-mon[81715]: pgmap v1777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:27:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/893010323' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:27:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/893010323' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:27:19 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:20 compute-1 podman[232793]: 2026-01-22 14:27:20.07873668 +0000 UTC m=+0.065817069 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:27:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:20.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:20.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:20 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:20 compute-1 ceph-mon[81715]: pgmap v1778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:27:22 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:22 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:22.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:22.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:23 compute-1 ceph-mon[81715]: pgmap v1779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:27:23 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:23 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:24 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:24.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:27:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:24.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:27:25 compute-1 ceph-mon[81715]: pgmap v1780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:27:25 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:26.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:26 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:26.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:27 compute-1 ceph-mon[81715]: pgmap v1781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:27 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:28.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:28 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3037 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:28 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:28.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:29 compute-1 ceph-mon[81715]: pgmap v1782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:29 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:30.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:30 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:30 compute-1 ceph-mon[81715]: pgmap v1783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:30.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:31 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:32.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:32 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:32 compute-1 ceph-mon[81715]: pgmap v1784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:32.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:33 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3042 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:33 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:34.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:34.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:34 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:34 compute-1 ceph-mon[81715]: pgmap v1785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:35 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:36.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:36.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:36 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:36 compute-1 ceph-mon[81715]: pgmap v1786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 6 op/s
Jan 22 14:27:36 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:37 compute-1 podman[232813]: 2026-01-22 14:27:37.097306735 +0000 UTC m=+0.083971294 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 14:27:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:37 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:38.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:38.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:38 compute-1 ceph-mon[81715]: pgmap v1787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:38 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:40 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:40.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:40.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:41 compute-1 ceph-mon[81715]: pgmap v1788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:41 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:42.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:42 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:42.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:43 compute-1 ceph-mon[81715]: pgmap v1789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:43 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:43 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:27:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:44.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:27:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:44.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:44 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:44 compute-1 ceph-mon[81715]: pgmap v1790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:45 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:46.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:46.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:46 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:46 compute-1 ceph-mon[81715]: pgmap v1791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 6 op/s
Jan 22 14:27:46 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:27:47.464 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:27:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:27:47.465 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:27:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:27:47.465 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:27:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:48.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:48 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:48 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:48.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:49 compute-1 ceph-mon[81715]: pgmap v1792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:49 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:50.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:50.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:51 compute-1 podman[232841]: 2026-01-22 14:27:51.067646637 +0000 UTC m=+0.051661475 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 14:27:51 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:27:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:52.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:52.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:52 compute-1 ceph-mon[81715]: pgmap v1793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:52 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:52 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:52 compute-1 ceph-mon[81715]: pgmap v1794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 529 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 6.9 KiB/s rd, 341 B/s wr, 10 op/s
Jan 22 14:27:54 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:54 compute-1 ceph-mon[81715]: Health check update: 25 slow ops, oldest one blocked for 3063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:54.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:54.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:55 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:55 compute-1 ceph-mon[81715]: pgmap v1795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 529 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 6.8 KiB/s rd, 341 B/s wr, 9 op/s
Jan 22 14:27:55 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2715672213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:27:55 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:56.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:56.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:56 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:56 compute-1 ceph-mon[81715]: pgmap v1796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 14:27:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:57 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:58.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:27:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:58.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:59 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:59 compute-1 ceph-mon[81715]: pgmap v1797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 22 14:28:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 14:28:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:00.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 14:28:00 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:00 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3252584016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:28:00 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:00 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/433983523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:28:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:00.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:01 compute-1 ceph-mon[81715]: pgmap v1798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 22 14:28:01 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/397044635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:28:01 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:02.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:02.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:02 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:02 compute-1 ceph-mon[81715]: pgmap v1799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 22 14:28:02 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/668164319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:28:02 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:04 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:04.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:04.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:05 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:05 compute-1 ceph-mon[81715]: pgmap v1800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 852 B/s wr, 19 op/s
Jan 22 14:28:05 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:06.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:06.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:06 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:06 compute-1 ceph-mon[81715]: pgmap v1801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 852 B/s wr, 19 op/s
Jan 22 14:28:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:08 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:08 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:08 compute-1 podman[232860]: 2026-01-22 14:28:08.147572449 +0000 UTC m=+0.137239881 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 14:28:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:08.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:08.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:09 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:09 compute-1 ceph-mon[81715]: pgmap v1802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 22 14:28:09 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:10.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:10.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:10 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:11 compute-1 ceph-mon[81715]: pgmap v1803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 683 KiB/s rd, 1 op/s
Jan 22 14:28:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:12.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:12.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:13 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:13 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:13 compute-1 ceph-mon[81715]: pgmap v1804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:28:13 compute-1 sudo[232888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:13 compute-1 sudo[232888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:13 compute-1 sudo[232888]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:13 compute-1 sudo[232913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:28:13 compute-1 sudo[232913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:13 compute-1 sudo[232913]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:13 compute-1 sudo[232938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:13 compute-1 sudo[232938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:13 compute-1 sudo[232938]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:13 compute-1 sudo[232963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:28:13 compute-1 sudo[232963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:14 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:14 compute-1 sudo[232963]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:14.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:14.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:15 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:15 compute-1 ceph-mon[81715]: pgmap v1805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:28:15 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:28:15 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:28:15 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 14:28:15 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 14:28:15 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:16.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:16.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:16 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:28:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:28:16 compute-1 ceph-mon[81715]: pgmap v1806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 546 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 39 op/s
Jan 22 14:28:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:28:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:28:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:28:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:28:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:28:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:28:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:18 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:18 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:18 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:28:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3920902371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:28:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:28:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3920902371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:28:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:18.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:18.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:19 compute-1 ceph-mon[81715]: pgmap v1807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 546 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 38 op/s
Jan 22 14:28:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3920902371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:28:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3920902371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:28:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:20.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:20.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:20 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:20 compute-1 ceph-mon[81715]: pgmap v1808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 22 14:28:22 compute-1 podman[233020]: 2026-01-22 14:28:22.072416414 +0000 UTC m=+0.061450441 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 14:28:22 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:28:22.087 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:28:22 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:28:22.088 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:28:22 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:28:22.089 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:28:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:22.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:22 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:22 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2632739966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:28:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:22.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:23 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:23 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:23 compute-1 ceph-mon[81715]: pgmap v1809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 22 14:28:23 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:24.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:24.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:24 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:24 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4267242123' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:28:25 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:25 compute-1 ceph-mon[81715]: pgmap v1810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 22 14:28:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:28:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:28:25 compute-1 sudo[233039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:25 compute-1 sudo[233039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:25 compute-1 sudo[233039]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:25 compute-1 sudo[233064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:28:25 compute-1 sudo[233064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:25 compute-1 sudo[233064]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:26.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:26.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:26 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:26 compute-1 ceph-mon[81715]: pgmap v1811: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 22 14:28:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:27 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:27 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1420612954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:28:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:28.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:28.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:28 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:28 compute-1 ceph-mon[81715]: pgmap v1812: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 586 KiB/s wr, 2 op/s
Jan 22 14:28:29 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:30.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:30.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:30 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:30 compute-1 ceph-mon[81715]: pgmap v1813: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 3.5 KiB/s rd, 587 KiB/s wr, 6 op/s
Jan 22 14:28:31 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:32.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:32.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:32 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:32 compute-1 ceph-mon[81715]: pgmap v1814: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 671 KiB/s rd, 13 KiB/s wr, 31 op/s
Jan 22 14:28:32 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:33 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:34.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:34.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:35 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:35 compute-1 ceph-mon[81715]: pgmap v1815: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 670 KiB/s rd, 12 KiB/s wr, 30 op/s
Jan 22 14:28:35 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:36.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:36.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:36 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:36 compute-1 ceph-mon[81715]: pgmap v1816: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 14:28:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:37 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:37 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:38.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:38.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:39 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:39 compute-1 ceph-mon[81715]: pgmap v1817: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 14:28:39 compute-1 podman[233089]: 2026-01-22 14:28:39.11788961 +0000 UTC m=+0.100559204 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:28:40 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2342146323' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:28:40 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:40.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:40.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:41 compute-1 ceph-mon[81715]: pgmap v1818: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 14:28:41 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:42.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:42.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:42 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:42 compute-1 ceph-mon[81715]: pgmap v1819: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Jan 22 14:28:43 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:43 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:44.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:44.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:44 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:44 compute-1 ceph-mon[81715]: pgmap v1820: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 44 op/s
Jan 22 14:28:45 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:46.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:46.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:46 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:46 compute-1 ceph-mon[81715]: pgmap v1821: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Jan 22 14:28:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:28:47.465 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:28:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:28:47.466 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:28:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:28:47.466 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:28:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:47 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:48.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:48.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:49 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:49 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:49 compute-1 ceph-mon[81715]: pgmap v1822: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.644533) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129644573, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1559, "num_deletes": 251, "total_data_size": 3044666, "memory_usage": 3088896, "flush_reason": "Manual Compaction"}
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129656692, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 1979886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51934, "largest_seqno": 53488, "table_properties": {"data_size": 1973541, "index_size": 3356, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16179, "raw_average_key_size": 21, "raw_value_size": 1959839, "raw_average_value_size": 2561, "num_data_blocks": 145, "num_entries": 765, "num_filter_entries": 765, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092027, "oldest_key_time": 1769092027, "file_creation_time": 1769092129, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 12193 microseconds, and 5667 cpu microseconds.
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.656726) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 1979886 bytes OK
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.656744) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.657926) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.657936) EVENT_LOG_v1 {"time_micros": 1769092129657933, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.657952) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 3037162, prev total WAL file size 3037162, number of live WAL files 2.
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.658741) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(1933KB)], [102(9511KB)]
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129658816, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 11719788, "oldest_snapshot_seqno": -1}
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 9615 keys, 10083242 bytes, temperature: kUnknown
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129727407, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 10083242, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10028388, "index_size": 29718, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 259598, "raw_average_key_size": 26, "raw_value_size": 9862288, "raw_average_value_size": 1025, "num_data_blocks": 1122, "num_entries": 9615, "num_filter_entries": 9615, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092129, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.727786) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10083242 bytes
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.729180) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.6 rd, 146.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.3 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(11.0) write-amplify(5.1) OK, records in: 10132, records dropped: 517 output_compression: NoCompression
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.729200) EVENT_LOG_v1 {"time_micros": 1769092129729188, "job": 64, "event": "compaction_finished", "compaction_time_micros": 68695, "compaction_time_cpu_micros": 25758, "output_level": 6, "num_output_files": 1, "total_output_size": 10083242, "num_input_records": 10132, "num_output_records": 9615, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129729761, "job": 64, "event": "table_file_deletion", "file_number": 104}
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129731314, "job": 64, "event": "table_file_deletion", "file_number": 102}
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.658626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.731409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.731423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.731425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.731426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:28:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.731428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:28:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:50.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:50.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:50 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:50 compute-1 ceph-mon[81715]: pgmap v1823: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 22 14:28:51 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:51 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:52.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:52.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:52 compute-1 ceph-mon[81715]: pgmap v1824: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 22 14:28:52 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3810589282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:28:52 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:53 compute-1 podman[233116]: 2026-01-22 14:28:53.065801891 +0000 UTC m=+0.056501397 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 14:28:53 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:53 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/820216801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:28:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:54.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:54.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:55 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:55 compute-1 ceph-mon[81715]: pgmap v1825: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 22 14:28:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:56.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:56.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:56 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:56 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:56 compute-1 ceph-mon[81715]: pgmap v1826: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 22 14:28:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:57 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:57 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:58.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:28:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:58.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:58 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:58 compute-1 ceph-mon[81715]: pgmap v1827: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 12 KiB/s wr, 1 op/s
Jan 22 14:28:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:28:58 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3247440006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:28:59 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:59 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3247440006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:29:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:00.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:00.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:01 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:01 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:01 compute-1 ceph-mon[81715]: pgmap v1828: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 12 KiB/s wr, 1 op/s
Jan 22 14:29:01 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:02 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2544514787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:02 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1270814979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:02 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:02.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:29:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:02.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:29:03 compute-1 ceph-mon[81715]: pgmap v1829: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 11 KiB/s wr, 0 op/s
Jan 22 14:29:03 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:03 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3776743246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:03 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2182985443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:04 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:04.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:04.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:05 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:05 compute-1 ceph-mon[81715]: pgmap v1830: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:29:06 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:06 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:29:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:06.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:29:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:06.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:07 compute-1 ceph-mon[81715]: pgmap v1831: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 22 14:29:07 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:08 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:08 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:08.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:29:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:08.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:29:09 compute-1 ceph-mon[81715]: pgmap v1832: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 22 14:29:10 compute-1 podman[233136]: 2026-01-22 14:29:10.135034462 +0000 UTC m=+0.116524487 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 22 14:29:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:10.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:10 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:10 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:10 compute-1 ceph-mon[81715]: pgmap v1833: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 22 14:29:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:10.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:11 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:12.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:12.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:12 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:12 compute-1 ceph-mon[81715]: pgmap v1834: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 22 14:29:12 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3324759149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:13 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:13 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:13 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/731787681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:14.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:14.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:14 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:14 compute-1 ceph-mon[81715]: pgmap v1835: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 22 14:29:15 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:15 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3774593624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:29:15 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3774593624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:29:16 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:29:16.337 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:29:16 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:29:16.339 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:29:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:29:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:16.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:29:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:16.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:16 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:16 compute-1 ceph-mon[81715]: pgmap v1836: 305 pgs: 2 active+clean+laggy, 303 active+clean; 541 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Jan 22 14:29:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:17 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:29:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2512728204' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:29:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:29:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2512728204' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:29:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:18.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:18.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:18 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:18 compute-1 ceph-mon[81715]: pgmap v1837: 305 pgs: 2 active+clean+laggy, 303 active+clean; 541 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 938 B/s wr, 27 op/s
Jan 22 14:29:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2512728204' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:29:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2512728204' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:29:19 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:20.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:20.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:20 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:20 compute-1 ceph-mon[81715]: pgmap v1838: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 14:29:21 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:22.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:22.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:22 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:22 compute-1 ceph-mon[81715]: pgmap v1839: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 14:29:22 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:22 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:24 compute-1 podman[233162]: 2026-01-22 14:29:24.064405761 +0000 UTC m=+0.051340467 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 14:29:24 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:24.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:24.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:25 compute-1 ceph-mon[81715]: pgmap v1840: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 14:29:25 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:26 compute-1 sudo[233184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:29:26 compute-1 sudo[233184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:26 compute-1 sudo[233184]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:26 compute-1 sudo[233209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:29:26 compute-1 sudo[233209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:26 compute-1 sudo[233209]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:26 compute-1 sudo[233234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:29:26 compute-1 sudo[233234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:26 compute-1 sudo[233234]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:26 compute-1 sudo[233259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:29:26 compute-1 sudo[233259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:26 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:29:26.341 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:29:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:29:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:26.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:29:26 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:26 compute-1 ceph-mon[81715]: pgmap v1841: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 14:29:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:29:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:26.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:29:26 compute-1 sudo[233259]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:27 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:29:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:29:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:29:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:29:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:29:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:29:27 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:28.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:28.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:28 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:28 compute-1 ceph-mon[81715]: pgmap v1842: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 2 op/s
Jan 22 14:29:29 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:30.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:30.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:30 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:30 compute-1 ceph-mon[81715]: pgmap v1843: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 2 op/s
Jan 22 14:29:31 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:29:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:32.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:29:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:32.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:32 compute-1 sudo[233315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:29:32 compute-1 sudo[233315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:32 compute-1 sudo[233315]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:32 compute-1 sudo[233340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:29:32 compute-1 sudo[233340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:32 compute-1 sudo[233340]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:33 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:33 compute-1 ceph-mon[81715]: pgmap v1844: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:29:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:29:34 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:34 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:34 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:34.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:34.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:35 compute-1 ceph-mon[81715]: pgmap v1845: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:35 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:36 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:36.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:36.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:37 compute-1 ceph-mon[81715]: pgmap v1846: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:37 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:38 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:38.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:38.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:39 compute-1 ceph-mon[81715]: pgmap v1847: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:39 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:40 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:40.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:40.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:41 compute-1 podman[233365]: 2026-01-22 14:29:41.139298866 +0000 UTC m=+0.122310255 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 22 14:29:41 compute-1 ceph-mon[81715]: pgmap v1848: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:41 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:29:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:42.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:29:42 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:42 compute-1 ceph-mon[81715]: pgmap v1849: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:42.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:43 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:43 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:44.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:44 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:44 compute-1 ceph-mon[81715]: pgmap v1850: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:44.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:45 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:46.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:46 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:46 compute-1 ceph-mon[81715]: pgmap v1851: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:46.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:29:47.466 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:29:47.467 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:29:47.467 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:47 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:47 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:48.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:29:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:48.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:29:48 compute-1 ceph-mon[81715]: pgmap v1852: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:49 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:49 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:29:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:50.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:29:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:29:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:50.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:29:50 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:50 compute-1 ceph-mon[81715]: pgmap v1853: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:51 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:52.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:52.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:52 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:52 compute-1 ceph-mon[81715]: pgmap v1854: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:53 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:53 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:54.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:54.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:54 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:54 compute-1 ceph-mon[81715]: pgmap v1855: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:55 compute-1 podman[233393]: 2026-01-22 14:29:55.057750086 +0000 UTC m=+0.047936873 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 22 14:29:55 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:56.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:56.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:56 compute-1 ceph-mon[81715]: pgmap v1856: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:57 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:57 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:58.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:29:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:29:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:58.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:29:58 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:58 compute-1 ceph-mon[81715]: pgmap v1857: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:59 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:59 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:00.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:00.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:00 compute-1 ceph-mon[81715]: Health detail: HEALTH_WARN 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops
Jan 22 14:30:00 compute-1 ceph-mon[81715]: [WRN] SLOW_OPS: 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops
Jan 22 14:30:00 compute-1 ceph-mon[81715]: pgmap v1858: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:01 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:01 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1158373973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:30:01 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:02.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:30:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:02.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:30:02 compute-1 ceph-mon[81715]: pgmap v1859: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:02 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1940122581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:30:02 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:02 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:03 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/901878159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:30:03 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:04.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:04.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:04 compute-1 ceph-mon[81715]: pgmap v1860: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:04 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3235649745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:30:04 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:06 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:06.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:06.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:07 compute-1 ceph-mon[81715]: pgmap v1861: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:07 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:07 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:30:07.244 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:30:07 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:30:07.245 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:30:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:08 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:08 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:08.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:08.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:09 compute-1 ceph-mon[81715]: pgmap v1862: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:09 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:30:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:10.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:30:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:10.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:11 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:11 compute-1 ceph-mon[81715]: pgmap v1863: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:11 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:30:11.247 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:30:12 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:12 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:12 compute-1 podman[233412]: 2026-01-22 14:30:12.143607719 +0000 UTC m=+0.115078818 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 14:30:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:12.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:30:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:12.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:30:13 compute-1 ceph-mon[81715]: pgmap v1864: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:13 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:14 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:14.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:14.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:15 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:15 compute-1 ceph-mon[81715]: pgmap v1865: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:16 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:16.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:16.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:17 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:17 compute-1 ceph-mon[81715]: pgmap v1866: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:18 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:18 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:18.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:18.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:19 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:19 compute-1 ceph-mon[81715]: pgmap v1867: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2415948044' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:30:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2415948044' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:30:20 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:20.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:20.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:21 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:21 compute-1 ceph-mon[81715]: pgmap v1868: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:22 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:22.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:22.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:23 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:23 compute-1 ceph-mon[81715]: pgmap v1869: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:23 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:24 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:24.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:24.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:25 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:25 compute-1 ceph-mon[81715]: pgmap v1870: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:26 compute-1 podman[233437]: 2026-01-22 14:30:26.067834808 +0000 UTC m=+0.061099242 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 14:30:26 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:26.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:26.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:27 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:27 compute-1 ceph-mon[81715]: pgmap v1871: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:28 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:28 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:28.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:28.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:29 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:29 compute-1 ceph-mon[81715]: pgmap v1872: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:30 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:30 compute-1 ceph-mon[81715]: pgmap v1873: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:30:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:30.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:30:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:30.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:31 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:32 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:32 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:32 compute-1 ceph-mon[81715]: pgmap v1874: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:32.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:32.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:33 compute-1 sudo[233457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:30:33 compute-1 sudo[233457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:33 compute-1 sudo[233457]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:33 compute-1 sudo[233482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:30:33 compute-1 sudo[233482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:33 compute-1 sudo[233482]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:33 compute-1 sudo[233507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:30:33 compute-1 sudo[233507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:33 compute-1 sudo[233507]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:33 compute-1 sudo[233532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:30:33 compute-1 sudo[233532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:33 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:33 compute-1 sudo[233532]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:34 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:30:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:30:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:30:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:30:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:30:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:30:34 compute-1 ceph-mon[81715]: pgmap v1875: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:34.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:34.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:35 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:36 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:36 compute-1 ceph-mon[81715]: pgmap v1876: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:36.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:36.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:37 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:38 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:38 compute-1 ceph-mon[81715]: pgmap v1877: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:38.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:38.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:39 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:40 compute-1 sudo[233589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:30:40 compute-1 sudo[233589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:40 compute-1 sudo[233589]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:40 compute-1 sudo[233614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:30:40 compute-1 sudo[233614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:40 compute-1 sudo[233614]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:40.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:40 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:40 compute-1 ceph-mon[81715]: pgmap v1878: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:30:40 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:30:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:40.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:41 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:42.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:42 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:42 compute-1 ceph-mon[81715]: pgmap v1879: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:42 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:42.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:43 compute-1 podman[233639]: 2026-01-22 14:30:43.108082462 +0000 UTC m=+0.100060471 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 22 14:30:43 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:44.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:44 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:44 compute-1 ceph-mon[81715]: pgmap v1880: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:44.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:45 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:46.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:46 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:46 compute-1 ceph-mon[81715]: pgmap v1881: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:46.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:30:47.467 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:30:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:30:47.467 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:30:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:30:47.467 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:30:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:47 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:47 compute-1 ceph-mon[81715]: Health check update: 31 slow ops, oldest one blocked for 3238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:48.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:48 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:48 compute-1 ceph-mon[81715]: pgmap v1882: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:30:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:48.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:30:49 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:50.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:50.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:50 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:50 compute-1 ceph-mon[81715]: pgmap v1883: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:51 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:52.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:52.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:52 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:52 compute-1 ceph-mon[81715]: pgmap v1884: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:53 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:53 compute-1 ceph-mon[81715]: Health check update: 31 slow ops, oldest one blocked for 3243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:30:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:54.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:30:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:54.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:54 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:54 compute-1 ceph-mon[81715]: pgmap v1885: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:55 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:30:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:56.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:30:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:56.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:56 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:56 compute-1 ceph-mon[81715]: pgmap v1886: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:57 compute-1 podman[233666]: 2026-01-22 14:30:57.082383321 +0000 UTC m=+0.066492959 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:30:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:57 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:57 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:57 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3697023630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:30:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:58.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:30:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:58.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:58 compute-1 ceph-mon[81715]: pgmap v1887: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:58 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:59 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:30:59 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/368227021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:31:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:00.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:00.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:00 compute-1 ceph-mon[81715]: pgmap v1888: 305 pgs: 2 active+clean+laggy, 303 active+clean; 527 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 291 KiB/s wr, 12 op/s
Jan 22 14:31:00 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:31:01 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:02.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:31:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:02.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:31:02 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/475015090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:31:02 compute-1 ceph-mon[81715]: pgmap v1889: 305 pgs: 2 active+clean+laggy, 303 active+clean; 579 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 28 op/s
Jan 22 14:31:02 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/141403899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:31:02 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:02 compute-1 ceph-mon[81715]: Health check update: 31 slow ops, oldest one blocked for 3248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:03 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/59182803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:31:03 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:04.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:31:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:04.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:31:04 compute-1 ceph-mon[81715]: pgmap v1890: 305 pgs: 2 active+clean+laggy, 303 active+clean; 579 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 28 op/s
Jan 22 14:31:04 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:06 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:31:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:06.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:31:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:06.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:07 compute-1 ceph-mon[81715]: pgmap v1891: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 42 op/s
Jan 22 14:31:07 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:08 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:08 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3257 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:31:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:08.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:31:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:08.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:09 compute-1 ceph-mon[81715]: pgmap v1892: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 42 op/s
Jan 22 14:31:09 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:10 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:10.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:10.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:11 compute-1 ceph-mon[81715]: pgmap v1893: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 42 op/s
Jan 22 14:31:11 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:12.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:12 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:31:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:12.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:31:13 compute-1 ceph-mon[81715]: pgmap v1894: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 3.0 MiB/s wr, 30 op/s
Jan 22 14:31:13 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:14 compute-1 podman[233685]: 2026-01-22 14:31:14.094777177 +0000 UTC m=+0.088829945 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:31:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:14.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:14.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:14 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:14 compute-1 ceph-mon[81715]: pgmap v1895: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.2 MiB/s wr, 14 op/s
Jan 22 14:31:15 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:31:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:16.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:31:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:16.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:16 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:16 compute-1 ceph-mon[81715]: pgmap v1896: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.2 MiB/s wr, 14 op/s
Jan 22 14:31:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:17 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:17 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:31:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4050486163' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:31:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:31:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4050486163' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:31:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:18.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:18.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:18 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:18 compute-1 ceph-mon[81715]: pgmap v1897: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4050486163' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:31:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4050486163' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:31:19 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:19 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:31:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:20.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:31:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:20.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:20 compute-1 ceph-mon[81715]: pgmap v1898: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:20 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:21 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:22.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:22.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:22 compute-1 ceph-mon[81715]: pgmap v1899: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:22 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:24 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:24 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:24.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:24.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:25 compute-1 ceph-mon[81715]: pgmap v1900: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:25 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:26 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:31:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:26.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:31:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:26.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:27 compute-1 ceph-mon[81715]: pgmap v1901: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:27 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:28 compute-1 podman[233711]: 2026-01-22 14:31:28.064572314 +0000 UTC m=+0.050079412 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 22 14:31:28 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:28.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:31:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:28.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:31:29 compute-1 ceph-mon[81715]: pgmap v1902: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:29 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:30 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:30.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:30.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:31 compute-1 ceph-mon[81715]: pgmap v1903: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:31 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:32 compute-1 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:31:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:32.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:32.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:33 compute-1 ceph-mon[81715]: pgmap v1904: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:33 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3282 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:33 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:34 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:34.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:34.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:35 compute-1 ceph-mon[81715]: pgmap v1905: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:35 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:36 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:31:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:36.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:31:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:36.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:37 compute-1 ceph-mon[81715]: pgmap v1906: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:37 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.291618) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297291649, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 2441, "num_deletes": 251, "total_data_size": 4671704, "memory_usage": 4731520, "flush_reason": "Manual Compaction"}
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297311192, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 3057164, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53493, "largest_seqno": 55929, "table_properties": {"data_size": 3048175, "index_size": 5163, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23266, "raw_average_key_size": 21, "raw_value_size": 3028198, "raw_average_value_size": 2778, "num_data_blocks": 222, "num_entries": 1090, "num_filter_entries": 1090, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092130, "oldest_key_time": 1769092130, "file_creation_time": 1769092297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 19612 microseconds, and 8267 cpu microseconds.
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.311227) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 3057164 bytes OK
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.311245) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.312434) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.312445) EVENT_LOG_v1 {"time_micros": 1769092297312442, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.312461) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 4660582, prev total WAL file size 4660582, number of live WAL files 2.
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.313465) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(2985KB)], [105(9846KB)]
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297313519, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 13140406, "oldest_snapshot_seqno": -1}
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 10190 keys, 11564234 bytes, temperature: kUnknown
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297366746, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 11564234, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11504810, "index_size": 32816, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 273402, "raw_average_key_size": 26, "raw_value_size": 11327818, "raw_average_value_size": 1111, "num_data_blocks": 1246, "num_entries": 10190, "num_filter_entries": 10190, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.367011) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 11564234 bytes
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.368052) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 246.4 rd, 216.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 9.6 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 10705, records dropped: 515 output_compression: NoCompression
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.368070) EVENT_LOG_v1 {"time_micros": 1769092297368061, "job": 66, "event": "compaction_finished", "compaction_time_micros": 53325, "compaction_time_cpu_micros": 27000, "output_level": 6, "num_output_files": 1, "total_output_size": 11564234, "num_input_records": 10705, "num_output_records": 10190, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297368727, "job": 66, "event": "table_file_deletion", "file_number": 107}
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297370193, "job": 66, "event": "table_file_deletion", "file_number": 105}
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.313394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.370241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.370245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.370246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.370248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.370249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:38 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:38 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3287 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:38.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:38.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:39 compute-1 ceph-mon[81715]: pgmap v1907: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:39 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:40 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:40.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:40 compute-1 sudo[233731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:31:40 compute-1 sudo[233731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:40 compute-1 sudo[233731]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:40 compute-1 sudo[233756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:31:40 compute-1 sudo[233756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:40 compute-1 sudo[233756]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:40 compute-1 sudo[233781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:31:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:40 compute-1 sudo[233781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:40.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:40 compute-1 sudo[233781]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:40 compute-1 sudo[233806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:31:40 compute-1 sudo[233806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:41 compute-1 sudo[233806]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:41 compute-1 ceph-mon[81715]: pgmap v1908: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:41 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:42 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 14:31:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:31:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:31:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:31:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:31:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:31:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.464115) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302464174, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 349, "num_deletes": 258, "total_data_size": 193899, "memory_usage": 201960, "flush_reason": "Manual Compaction"}
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302467048, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 127264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55934, "largest_seqno": 56278, "table_properties": {"data_size": 125158, "index_size": 270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5510, "raw_average_key_size": 18, "raw_value_size": 120743, "raw_average_value_size": 397, "num_data_blocks": 12, "num_entries": 304, "num_filter_entries": 304, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092298, "oldest_key_time": 1769092298, "file_creation_time": 1769092302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 2952 microseconds, and 1030 cpu microseconds.
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.467079) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 127264 bytes OK
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.467091) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.469360) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.469371) EVENT_LOG_v1 {"time_micros": 1769092302469367, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.469386) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 191450, prev total WAL file size 191450, number of live WAL files 2.
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.469678) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323630' seq:72057594037927935, type:22 .. '6C6F676D0032353134' seq:0, type:0; will stop at (end)
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(124KB)], [108(11MB)]
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302469707, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 11691498, "oldest_snapshot_seqno": -1}
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 9967 keys, 11552426 bytes, temperature: kUnknown
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302524569, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 11552426, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11494045, "index_size": 32349, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 269773, "raw_average_key_size": 27, "raw_value_size": 11320386, "raw_average_value_size": 1135, "num_data_blocks": 1223, "num_entries": 9967, "num_filter_entries": 9967, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.525107) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 11552426 bytes
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.527041) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.8 rd, 209.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.0 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(182.6) write-amplify(90.8) OK, records in: 10494, records dropped: 527 output_compression: NoCompression
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.527065) EVENT_LOG_v1 {"time_micros": 1769092302527054, "job": 68, "event": "compaction_finished", "compaction_time_micros": 55204, "compaction_time_cpu_micros": 25359, "output_level": 6, "num_output_files": 1, "total_output_size": 11552426, "num_input_records": 10494, "num_output_records": 9967, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302527751, "job": 68, "event": "table_file_deletion", "file_number": 110}
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302530763, "job": 68, "event": "table_file_deletion", "file_number": 108}
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.469617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.530984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.530989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.530990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.530992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:42 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.530993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:42.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:42.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:43 compute-1 ceph-mon[81715]: pgmap v1909: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:43 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:43 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3292 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:44 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:44.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:44.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:45 compute-1 podman[233862]: 2026-01-22 14:31:45.098932348 +0000 UTC m=+0.088247559 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 14:31:45 compute-1 ceph-mon[81715]: pgmap v1910: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:45 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:46 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:46.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:46.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:47 compute-1 sudo[233888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:31:47 compute-1 sudo[233888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:47 compute-1 sudo[233888]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:47 compute-1 sudo[233913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:31:47 compute-1 sudo[233913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:47 compute-1 sudo[233913]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:47 compute-1 ceph-mon[81715]: pgmap v1911: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:47 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:31:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:31:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:31:47.468 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:31:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:31:47.468 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:31:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:31:47.468 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:31:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:48 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:48 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3297 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:48 compute-1 ceph-mon[81715]: pgmap v1912: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:48.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:48.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:49 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:50 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:50 compute-1 ceph-mon[81715]: pgmap v1913: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:31:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:50.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:31:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:50.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:51 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:52 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:52 compute-1 ceph-mon[81715]: pgmap v1914: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 14:31:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:52.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:52.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:53 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:53 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:54 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:54 compute-1 ceph-mon[81715]: pgmap v1915: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 14:31:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:54.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:31:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:54.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:31:55 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:56 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:56 compute-1 ceph-mon[81715]: pgmap v1916: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 14:31:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:56.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:56.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:57 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:58 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:58 compute-1 ceph-mon[81715]: pgmap v1917: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 14:31:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:58.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:31:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:58.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:59 compute-1 podman[233938]: 2026-01-22 14:31:59.050382347 +0000 UTC m=+0.042510727 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:31:59 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:00 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:00 compute-1 ceph-mon[81715]: pgmap v1918: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 14:32:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:00.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:00.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:01 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:02 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:02 compute-1 ceph-mon[81715]: pgmap v1919: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 14:32:02 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/4247879608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:32:02 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:02.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:02.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:03 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:03 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1801557705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:32:04 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:04 compute-1 ceph-mon[81715]: pgmap v1920: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:04.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:04.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:05 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:06.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:06 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:06 compute-1 ceph-mon[81715]: pgmap v1921: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:06.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:07 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:07 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:08.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:08 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:08 compute-1 ceph-mon[81715]: pgmap v1922: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:08.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:09 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:10.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:10 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:10 compute-1 ceph-mon[81715]: pgmap v1923: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:10.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:11 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:12.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:12 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:12 compute-1 ceph-mon[81715]: pgmap v1924: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:12.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:13 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:13 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:14.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:14 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:14 compute-1 ceph-mon[81715]: pgmap v1925: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:14.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:15 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:16 compute-1 podman[233957]: 2026-01-22 14:32:16.116605818 +0000 UTC m=+0.107092645 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 22 14:32:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:16.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:16 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:16 compute-1 ceph-mon[81715]: pgmap v1926: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:16.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:17 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:18.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:18 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:18 compute-1 ceph-mon[81715]: pgmap v1927: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/458429548' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:32:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/458429548' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:32:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:18.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:19 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:20.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:32:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:20.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:32:20 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:20 compute-1 ceph-mon[81715]: pgmap v1928: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:22 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:22 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:22.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:22.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:23 compute-1 ceph-mon[81715]: pgmap v1929: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:23 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:23 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:24 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:24.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:24.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:25 compute-1 ceph-mon[81715]: pgmap v1930: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:25 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:26 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:32:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:26.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:32:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:32:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:26.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:32:27 compute-1 ceph-mon[81715]: pgmap v1931: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:27 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:28 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:28 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3337 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:28.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:29 compute-1 ceph-mon[81715]: pgmap v1932: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:29 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:29 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3871583424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:32:30 compute-1 podman[233985]: 2026-01-22 14:32:30.091492322 +0000 UTC m=+0.072499124 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:32:30 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:30 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1888876612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:32:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:30.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:32:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:30.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:32:31 compute-1 ceph-mon[81715]: pgmap v1933: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:31 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2091226058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:32:32 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:32.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:32.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:33 compute-1 ceph-mon[81715]: pgmap v1934: 305 pgs: 2 active+clean+laggy, 303 active+clean; 633 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.3 MiB/s wr, 23 op/s
Jan 22 14:32:33 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3342 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:33 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:34 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:34.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:34.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:35 compute-1 ceph-mon[81715]: pgmap v1935: 305 pgs: 2 active+clean+laggy, 303 active+clean; 633 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.3 MiB/s wr, 23 op/s
Jan 22 14:32:35 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:36 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:36.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:36.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:37 compute-1 ceph-mon[81715]: pgmap v1936: 305 pgs: 2 active+clean+laggy, 303 active+clean; 648 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 14:32:37 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:38 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:38 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:38 compute-1 ceph-mon[81715]: pgmap v1937: 305 pgs: 2 active+clean+laggy, 303 active+clean; 648 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 14:32:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:38.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:38.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:39 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:40 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3676994071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:32:40 compute-1 ceph-mon[81715]: pgmap v1938: 305 pgs: 2 active+clean+laggy, 303 active+clean; 648 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 14:32:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:40.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:40.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:41 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:42.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:42 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:42 compute-1 ceph-mon[81715]: pgmap v1939: 305 pgs: 2 active+clean+laggy, 303 active+clean; 648 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 14:32:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:42.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:43 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:43 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3352 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:44.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:44 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:44 compute-1 ceph-mon[81715]: pgmap v1940: 305 pgs: 2 active+clean+laggy, 303 active+clean; 648 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 476 KiB/s wr, 76 op/s
Jan 22 14:32:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:44.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:45 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:32:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:32:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:46.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:32:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:46.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:46 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:46 compute-1 ceph-mon[81715]: pgmap v1941: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 95 op/s
Jan 22 14:32:47 compute-1 podman[234004]: 2026-01-22 14:32:47.096214226 +0000 UTC m=+0.085184114 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 14:32:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:32:47.469 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:32:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:32:47.470 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:32:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:32:47.470 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:32:47 compute-1 sudo[234030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:32:47 compute-1 sudo[234030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:47 compute-1 sudo[234030]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:47 compute-1 sudo[234055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:32:47 compute-1 sudo[234055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:47 compute-1 sudo[234055]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:47 compute-1 sudo[234080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:32:47 compute-1 sudo[234080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:47 compute-1 sudo[234080]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:47 compute-1 sudo[234105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:32:47 compute-1 sudo[234105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:47 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:48 compute-1 sudo[234105]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:48.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:48.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:48 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:48 compute-1 ceph-mon[81715]: pgmap v1942: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Jan 22 14:32:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:32:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:32:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:32:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:32:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:32:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:32:48 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:49 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:50.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:50.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:50 compute-1 ceph-mon[81715]: pgmap v1943: 305 pgs: 2 active+clean+laggy, 303 active+clean; 675 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 170 KiB/s rd, 2.1 MiB/s wr, 46 op/s
Jan 22 14:32:50 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:51 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:52.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:52.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:52 compute-1 ceph-mon[81715]: pgmap v1944: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 176 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 22 14:32:52 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 3357 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:52 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:54 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:54 compute-1 sudo[234162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:32:54 compute-1 sudo[234162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:54 compute-1 sudo[234162]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:54 compute-1 sudo[234187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:32:54 compute-1 sudo[234187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:54 compute-1 sudo[234187]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:32:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:54.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:32:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:54.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:55 compute-1 ceph-mon[81715]: pgmap v1945: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 176 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 22 14:32:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:32:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:32:55 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:56 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:56.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:56.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:57 compute-1 ceph-mon[81715]: pgmap v1946: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 176 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 22 14:32:57 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.484007) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377484341, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1257, "num_deletes": 252, "total_data_size": 2148717, "memory_usage": 2179016, "flush_reason": "Manual Compaction"}
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377493941, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 920251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56283, "largest_seqno": 57535, "table_properties": {"data_size": 916035, "index_size": 1612, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13120, "raw_average_key_size": 21, "raw_value_size": 906101, "raw_average_value_size": 1487, "num_data_blocks": 70, "num_entries": 609, "num_filter_entries": 609, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092303, "oldest_key_time": 1769092303, "file_creation_time": 1769092377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 9818 microseconds, and 4928 cpu microseconds.
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.494089) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 920251 bytes OK
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.494144) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.495377) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.495390) EVENT_LOG_v1 {"time_micros": 1769092377495386, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.495407) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 2142534, prev total WAL file size 2142534, number of live WAL files 2.
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.496489) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353035' seq:72057594037927935, type:22 .. '6D6772737461740031373538' seq:0, type:0; will stop at (end)
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(898KB)], [111(11MB)]
Jan 22 14:32:57 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377496549, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 12472677, "oldest_snapshot_seqno": -1}
Jan 22 14:32:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 10090 keys, 9038002 bytes, temperature: kUnknown
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092378054142, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 9038002, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8982954, "index_size": 28696, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25285, "raw_key_size": 273005, "raw_average_key_size": 27, "raw_value_size": 8811278, "raw_average_value_size": 873, "num_data_blocks": 1070, "num_entries": 10090, "num_filter_entries": 10090, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.055865) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 9038002 bytes
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.060477) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 22.4 rd, 16.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.0 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(23.4) write-amplify(9.8) OK, records in: 10576, records dropped: 486 output_compression: NoCompression
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.060495) EVENT_LOG_v1 {"time_micros": 1769092378060487, "job": 70, "event": "compaction_finished", "compaction_time_micros": 557684, "compaction_time_cpu_micros": 25114, "output_level": 6, "num_output_files": 1, "total_output_size": 9038002, "num_input_records": 10576, "num_output_records": 10090, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092378061396, "job": 70, "event": "table_file_deletion", "file_number": 113}
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092378064309, "job": 70, "event": "table_file_deletion", "file_number": 111}
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.496369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.064509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.064515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.064517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.064518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:32:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.064520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:32:58 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:58 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:32:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:58.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:32:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:32:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:58.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:59 compute-1 ceph-mon[81715]: pgmap v1947: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 574 KiB/s wr, 35 op/s
Jan 22 14:32:59 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:00 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:00.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:00.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:01 compute-1 podman[234212]: 2026-01-22 14:33:01.05552091 +0000 UTC m=+0.050426709 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:33:01 compute-1 ceph-mon[81715]: pgmap v1948: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 574 KiB/s wr, 35 op/s
Jan 22 14:33:01 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:02 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:02.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:02.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:03 compute-1 ceph-mon[81715]: pgmap v1949: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 56 KiB/s wr, 7 op/s
Jan 22 14:33:03 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:03 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:04 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:04 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/433948236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:33:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:04.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:04.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:05 compute-1 ceph-mon[81715]: pgmap v1950: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:05 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:05 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/33695955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:33:06 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:06 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1940242111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:33:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:06.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:06.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:07 compute-1 ceph-mon[81715]: pgmap v1951: 305 pgs: 2 active+clean+laggy, 303 active+clean; 627 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 682 B/s wr, 9 op/s
Jan 22 14:33:07 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:08 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:08 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:08.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:33:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:08.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:33:09 compute-1 ceph-mon[81715]: pgmap v1952: 305 pgs: 2 active+clean+laggy, 303 active+clean; 627 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 682 B/s wr, 9 op/s
Jan 22 14:33:09 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:09 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:33:09.437 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:33:09 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:33:09.438 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:33:10 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:10 compute-1 ceph-mon[81715]: pgmap v1953: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 14:33:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:10.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:10.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:11 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:33:11.440 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:33:12 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:12 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:12 compute-1 ceph-mon[81715]: pgmap v1954: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 14:33:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:33:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:12.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:33:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:12.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:13 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:13 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:14 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:14 compute-1 ceph-mon[81715]: pgmap v1955: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 14:33:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:33:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:14.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:33:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:14.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:15 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:16 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:16 compute-1 ceph-mon[81715]: pgmap v1956: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 14:33:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:33:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:16.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:33:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:16.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:17 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:17 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:18 compute-1 podman[234232]: 2026-01-22 14:33:18.09457805 +0000 UTC m=+0.086941762 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 22 14:33:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:18.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:18 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:18 compute-1 ceph-mon[81715]: pgmap v1957: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 511 B/s wr, 18 op/s
Jan 22 14:33:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1972462447' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:33:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1972462447' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:33:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:18.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:19 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:33:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:20.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:33:20 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:20 compute-1 ceph-mon[81715]: pgmap v1958: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 511 B/s wr, 18 op/s
Jan 22 14:33:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:20.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:21 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:22.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:22 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:22 compute-1 ceph-mon[81715]: pgmap v1959: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:22.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:23 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:23 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:24.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:24 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:24 compute-1 ceph-mon[81715]: pgmap v1960: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:24.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:25 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:26.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:26 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:26 compute-1 ceph-mon[81715]: pgmap v1961: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:26.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:27 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:28.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:28 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:28 compute-1 ceph-mon[81715]: pgmap v1962: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:28 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:28.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:29 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:33:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:30.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:33:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:30.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:30 compute-1 ceph-mon[81715]: pgmap v1963: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:30 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:31 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:32 compute-1 podman[234259]: 2026-01-22 14:33:32.043811095 +0000 UTC m=+0.040017749 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 14:33:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:33:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:32.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:33:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:32.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:32 compute-1 ceph-mon[81715]: pgmap v1964: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:32 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:32 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:34 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:34.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:34.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:35 compute-1 ceph-mon[81715]: pgmap v1965: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:35 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:36 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:36.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:36.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:37 compute-1 ceph-mon[81715]: pgmap v1966: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:37 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:38 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:38 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:38.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:38.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:39 compute-1 ceph-mon[81715]: pgmap v1967: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:39 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:40 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:40.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:40.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:41 compute-1 ceph-mon[81715]: pgmap v1968: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:41 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:42 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:42.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:42.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:43 compute-1 ceph-mon[81715]: pgmap v1969: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:43 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:43 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:44 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:44.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:44.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:45 compute-1 ceph-mon[81715]: pgmap v1970: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:45 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:46 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:33:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:46.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:33:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:46.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:47 compute-1 ceph-mon[81715]: pgmap v1971: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:47 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:33:47.471 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:33:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:33:47.471 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:33:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:33:47.472 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:33:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:48 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:48 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:33:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:48.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:33:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:48.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:49 compute-1 podman[234279]: 2026-01-22 14:33:49.09863345 +0000 UTC m=+0.090510219 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 14:33:49 compute-1 ceph-mon[81715]: pgmap v1972: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:49 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:50 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:50.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:50.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:51 compute-1 ceph-mon[81715]: pgmap v1973: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:51 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:52 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:33:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:52.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:33:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:52.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:53 compute-1 ceph-mon[81715]: pgmap v1974: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:53 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:53 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:54 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:54 compute-1 sudo[234305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:33:54 compute-1 sudo[234305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:54 compute-1 sudo[234305]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:54 compute-1 sudo[234330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:33:54 compute-1 sudo[234330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:54 compute-1 sudo[234330]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:54 compute-1 sudo[234355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:33:54 compute-1 sudo[234355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:54 compute-1 sudo[234355]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:54.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:54 compute-1 sudo[234380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:33:54 compute-1 sudo[234380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:54.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:55 compute-1 ceph-mon[81715]: pgmap v1975: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 340 B/s rd, 0 op/s
Jan 22 14:33:55 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:55 compute-1 sudo[234380]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:55 compute-1 sudo[234436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:33:55 compute-1 sudo[234436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:55 compute-1 sudo[234436]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:55 compute-1 sudo[234461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:33:55 compute-1 sudo[234461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:55 compute-1 sudo[234461]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:55 compute-1 sudo[234486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:33:55 compute-1 sudo[234486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:55 compute-1 sudo[234486]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:55 compute-1 sudo[234511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 22 14:33:55 compute-1 sudo[234511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:55 compute-1 sudo[234511]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:56.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:56 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:56 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:33:56 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:33:56 compute-1 ceph-mon[81715]: pgmap v1976: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 4 op/s
Jan 22 14:33:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:56.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:57 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:33:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:33:57 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:33:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:58.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:33:58 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:58 compute-1 ceph-mon[81715]: pgmap v1977: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 4 op/s
Jan 22 14:33:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:33:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:33:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:33:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:33:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:33:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:33:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:33:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:33:58 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e151 e151: 3 total, 3 up, 3 in
Jan 22 14:33:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:33:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:59.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:59 compute-1 ceph-mon[81715]: osdmap e151: 3 total, 3 up, 3 in
Jan 22 14:33:59 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:00.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:00 compute-1 ceph-mon[81715]: pgmap v1979: 305 pgs: 2 active+clean+laggy, 303 active+clean; 608 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 824 KiB/s rd, 819 KiB/s wr, 7 op/s
Jan 22 14:34:00 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:01.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:02.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:03.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:03 compute-1 podman[234554]: 2026-01-22 14:34:03.076604886 +0000 UTC m=+0.072133784 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:34:03 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:03 compute-1 ceph-mon[81715]: pgmap v1980: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 33 op/s
Jan 22 14:34:03 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:03 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:04.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:05.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:05 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:05 compute-1 ceph-mon[81715]: pgmap v1981: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 32 op/s
Jan 22 14:34:05 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2051152007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:34:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:34:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:34:05 compute-1 sudo[234574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:34:05 compute-1 sudo[234574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:34:05 compute-1 sudo[234574]: pam_unix(sudo:session): session closed for user root
Jan 22 14:34:05 compute-1 sudo[234599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:34:05 compute-1 sudo[234599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:34:05 compute-1 sudo[234599]: pam_unix(sudo:session): session closed for user root
Jan 22 14:34:06 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:06 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2576899971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:34:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:06.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:07.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:07 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:07 compute-1 ceph-mon[81715]: pgmap v1982: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 27 op/s
Jan 22 14:34:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:08 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:08 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3438 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:08 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2163323487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:34:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:08.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:09.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:09 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:09 compute-1 ceph-mon[81715]: pgmap v1983: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 27 op/s
Jan 22 14:34:09 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:09 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:34:09.878 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:34:09 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:34:09.879 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:34:10 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:10.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:11.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:11 compute-1 ceph-mon[81715]: pgmap v1984: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 24 op/s
Jan 22 14:34:11 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:12 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:12 compute-1 ceph-mon[81715]: pgmap v1985: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 27 op/s
Jan 22 14:34:12 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3564186839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:34:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:12.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:12 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:34:12.881 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:34:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:13.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:13 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:13 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3443 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:13 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1951221719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:34:14 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:14 compute-1 ceph-mon[81715]: pgmap v1986: 305 pgs: 2 active+clean+laggy, 303 active+clean; 640 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 409 KiB/s wr, 30 op/s
Jan 22 14:34:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:34:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:14.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:34:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:15.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:15 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:16.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:16 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:16 compute-1 ceph-mon[81715]: pgmap v1987: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.887841) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456887872, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1349, "num_deletes": 251, "total_data_size": 2458298, "memory_usage": 2500680, "flush_reason": "Manual Compaction"}
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456898057, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 1594051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57540, "largest_seqno": 58884, "table_properties": {"data_size": 1588533, "index_size": 2722, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14237, "raw_average_key_size": 20, "raw_value_size": 1576433, "raw_average_value_size": 2314, "num_data_blocks": 118, "num_entries": 681, "num_filter_entries": 681, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092378, "oldest_key_time": 1769092378, "file_creation_time": 1769092456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 10267 microseconds, and 4372 cpu microseconds.
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.898105) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 1594051 bytes OK
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.898125) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.899302) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.899317) EVENT_LOG_v1 {"time_micros": 1769092456899312, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.899333) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 2451712, prev total WAL file size 2451712, number of live WAL files 2.
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.900139) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(1556KB)], [114(8826KB)]
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456900184, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 10632053, "oldest_snapshot_seqno": -1}
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 10250 keys, 8931758 bytes, temperature: kUnknown
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456943359, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 8931758, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8875929, "index_size": 29093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 277564, "raw_average_key_size": 27, "raw_value_size": 8701570, "raw_average_value_size": 848, "num_data_blocks": 1083, "num_entries": 10250, "num_filter_entries": 10250, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.943622) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 8931758 bytes
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.945500) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 245.8 rd, 206.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.6 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(12.3) write-amplify(5.6) OK, records in: 10771, records dropped: 521 output_compression: NoCompression
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.945516) EVENT_LOG_v1 {"time_micros": 1769092456945507, "job": 72, "event": "compaction_finished", "compaction_time_micros": 43254, "compaction_time_cpu_micros": 21319, "output_level": 6, "num_output_files": 1, "total_output_size": 8931758, "num_input_records": 10771, "num_output_records": 10250, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456945845, "job": 72, "event": "table_file_deletion", "file_number": 116}
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456947381, "job": 72, "event": "table_file_deletion", "file_number": 114}
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.899971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.947447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.947452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.947454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.947455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:34:16 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.947457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:34:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:17.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:17 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:17 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2506543262' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]: dispatch
Jan 22 14:34:17 compute-1 ceph-mon[81715]: from='client.? ' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]: dispatch
Jan 22 14:34:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e152 e152: 3 total, 3 up, 3 in
Jan 22 14:34:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:18.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 e153: 3 total, 3 up, 3 in
Jan 22 14:34:18 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:18 compute-1 ceph-mon[81715]: from='client.? ' entity='client.openstack' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]': finished
Jan 22 14:34:18 compute-1 ceph-mon[81715]: osdmap e152: 3 total, 3 up, 3 in
Jan 22 14:34:18 compute-1 ceph-mon[81715]: pgmap v1989: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 40 op/s
Jan 22 14:34:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3489887515' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:34:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3489887515' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:34:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:19.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:20 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:20 compute-1 ceph-mon[81715]: osdmap e153: 3 total, 3 up, 3 in
Jan 22 14:34:20 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2725787254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:34:20 compute-1 podman[234624]: 2026-01-22 14:34:20.096457688 +0000 UTC m=+0.088104183 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 14:34:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:20.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:21.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:21 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:21 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1676911576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:34:21 compute-1 ceph-mon[81715]: pgmap v1991: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 83 op/s
Jan 22 14:34:21 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:22 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:22.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:23.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:23 compute-1 ceph-mon[81715]: pgmap v1992: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 22 14:34:23 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:23 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1366398213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:34:24 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:24 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:24.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:25.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:25 compute-1 ceph-mon[81715]: pgmap v1993: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 23 KiB/s wr, 186 op/s
Jan 22 14:34:26 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:26 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:26.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:27.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:27 compute-1 ceph-mon[81715]: pgmap v1994: 305 pgs: 2 active+clean+laggy, 303 active+clean; 706 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 1.9 MiB/s wr, 294 op/s
Jan 22 14:34:27 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:28 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:28 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:28.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:29.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:29 compute-1 ceph-mon[81715]: pgmap v1995: 305 pgs: 2 active+clean+laggy, 303 active+clean; 706 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 1.6 MiB/s wr, 242 op/s
Jan 22 14:34:29 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:30 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3535101709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:34:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:34:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:30.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:34:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:31.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:31 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:31 compute-1 ceph-mon[81715]: pgmap v1996: 305 pgs: 2 active+clean+laggy, 303 active+clean; 715 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.5 MiB/s wr, 246 op/s
Jan 22 14:34:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2532096136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:34:32 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:32.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:33.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:33 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:33 compute-1 ceph-mon[81715]: pgmap v1997: 305 pgs: 2 active+clean+laggy, 303 active+clean; 745 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.8 MiB/s wr, 227 op/s
Jan 22 14:34:33 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 3463 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:33 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:34 compute-1 podman[234650]: 2026-01-22 14:34:34.06381052 +0000 UTC m=+0.055417544 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 22 14:34:34 compute-1 ceph-mon[81715]: pgmap v1998: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 181 op/s
Jan 22 14:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.5 total, 600.0 interval
                                           Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 3163 syncs, 3.46 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1666 writes, 5174 keys, 1666 commit groups, 1.0 writes per commit group, ingest: 5.52 MB, 0.01 MB/s
                                           Interval WAL: 1666 writes, 732 syncs, 2.28 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 14:34:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:34.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:35.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:35 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:35 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:36 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:36 compute-1 ceph-mon[81715]: pgmap v1999: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Jan 22 14:34:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:36.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:37.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:37 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:37 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 3468 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:38 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:38 compute-1 ceph-mon[81715]: pgmap v2000: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 384 KiB/s rd, 2.6 MiB/s wr, 67 op/s
Jan 22 14:34:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:38.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:39.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:39 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:40 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:40 compute-1 ceph-mon[81715]: pgmap v2001: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 384 KiB/s rd, 2.6 MiB/s wr, 67 op/s
Jan 22 14:34:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:40.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:41.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:41 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:42 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:42 compute-1 ceph-mon[81715]: pgmap v2002: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 240 KiB/s rd, 1.6 MiB/s wr, 39 op/s
Jan 22 14:34:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:42.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:43.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:43 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 3473 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:43 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:44 compute-1 ceph-mon[81715]: pgmap v2003: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 88 KiB/s wr, 14 op/s
Jan 22 14:34:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:44.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:45.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:45 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:46 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:46 compute-1 ceph-mon[81715]: pgmap v2004: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Jan 22 14:34:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:34:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:46.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:34:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:47.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:34:47.472 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:34:47.473 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:34:47.473 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:34:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:47 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:48 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:48 compute-1 ceph-mon[81715]: pgmap v2005: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Jan 22 14:34:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:48.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:49.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:49 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:50 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:50 compute-1 ceph-mon[81715]: pgmap v2006: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Jan 22 14:34:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:50.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:51.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:51 compute-1 podman[234670]: 2026-01-22 14:34:51.108735628 +0000 UTC m=+0.094367203 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:34:51 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:52 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:52 compute-1 ceph-mon[81715]: pgmap v2007: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 22 14:34:52 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 3478 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:52.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:34:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:53.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:34:54 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:54.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:55.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:55 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:55 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:55 compute-1 ceph-mon[81715]: pgmap v2008: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 22 14:34:56 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:34:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:56.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:57.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:57 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:34:57 compute-1 ceph-mon[81715]: pgmap v2009: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 1023 B/s wr, 0 op/s
Jan 22 14:34:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:58 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:34:58 compute-1 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 3488 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:58.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:34:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:59.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:59 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:34:59 compute-1 ceph-mon[81715]: pgmap v2010: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 1023 B/s wr, 0 op/s
Jan 22 14:35:00 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:35:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:00.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:01.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:01 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:35:01 compute-1 ceph-mon[81715]: pgmap v2011: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 597 B/s rd, 2.8 KiB/s wr, 0 op/s
Jan 22 14:35:02 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:35:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:02.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:03.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:03 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:35:03 compute-1 ceph-mon[81715]: pgmap v2012: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 1.9 KiB/s wr, 1 op/s
Jan 22 14:35:03 compute-1 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 3493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:04 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:35:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:04.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:05 compute-1 podman[234698]: 2026-01-22 14:35:05.085734809 +0000 UTC m=+0.081705682 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 22 14:35:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:35:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:05.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:35:05 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:05 compute-1 ceph-mon[81715]: pgmap v2013: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 3.2 KiB/s wr, 1 op/s
Jan 22 14:35:05 compute-1 sudo[234717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:05 compute-1 sudo[234717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:05 compute-1 sudo[234717]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:05 compute-1 sudo[234742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:35:05 compute-1 sudo[234742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:05 compute-1 sudo[234742]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:05 compute-1 sudo[234767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:05 compute-1 sudo[234767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:05 compute-1 sudo[234767]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:05 compute-1 sudo[234792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 14:35:05 compute-1 sudo[234792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:06 compute-1 podman[234890]: 2026-01-22 14:35:06.372690055 +0000 UTC m=+0.079486281 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:35:06 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:06 compute-1 podman[234890]: 2026-01-22 14:35:06.487069296 +0000 UTC m=+0.193865532 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 14:35:06 compute-1 sudo[234792]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:06.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:07 compute-1 sudo[235014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:07 compute-1 sudo[235014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:07 compute-1 sudo[235014]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:07 compute-1 sudo[235039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:35:07 compute-1 sudo[235039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:07 compute-1 sudo[235039]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:07 compute-1 sudo[235064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:07 compute-1 sudo[235064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:07 compute-1 sudo[235064]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:07 compute-1 sudo[235089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:35:07 compute-1 sudo[235089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:07 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:07 compute-1 ceph-mon[81715]: pgmap v2014: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 4.2 KiB/s wr, 1 op/s
Jan 22 14:35:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:07 compute-1 sudo[235089]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:07 compute-1 sudo[235146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:07 compute-1 sudo[235146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:07 compute-1 sudo[235146]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:07 compute-1 sudo[235171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:35:07 compute-1 sudo[235171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:07 compute-1 sudo[235171]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:07 compute-1 sudo[235196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:07 compute-1 sudo[235196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:07 compute-1 sudo[235196]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:07 compute-1 sudo[235221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- inventory --format=json-pretty --filter-for-batch
Jan 22 14:35:07 compute-1 sudo[235221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:08 compute-1 podman[235286]: 2026-01-22 14:35:08.33539481 +0000 UTC m=+0.044427157 container create c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 14:35:08 compute-1 systemd[1]: Started libpod-conmon-c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83.scope.
Jan 22 14:35:08 compute-1 podman[235286]: 2026-01-22 14:35:08.317850728 +0000 UTC m=+0.026883105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 14:35:08 compute-1 systemd[1]: Started libcrun container.
Jan 22 14:35:08 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:08 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:08 compute-1 podman[235286]: 2026-01-22 14:35:08.43823042 +0000 UTC m=+0.147262797 container init c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 14:35:08 compute-1 podman[235286]: 2026-01-22 14:35:08.448995799 +0000 UTC m=+0.158028156 container start c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 14:35:08 compute-1 podman[235286]: 2026-01-22 14:35:08.453129821 +0000 UTC m=+0.162162178 container attach c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 14:35:08 compute-1 suspicious_swartz[235302]: 167 167
Jan 22 14:35:08 compute-1 systemd[1]: libpod-c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83.scope: Deactivated successfully.
Jan 22 14:35:08 compute-1 podman[235307]: 2026-01-22 14:35:08.489538551 +0000 UTC m=+0.025538749 container died c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 14:35:08 compute-1 systemd[1]: var-lib-containers-storage-overlay-22f0ba300e207645b24e39b449db2e5248126083b603c759ad91eca8b34d8548-merged.mount: Deactivated successfully.
Jan 22 14:35:08 compute-1 podman[235307]: 2026-01-22 14:35:08.529703932 +0000 UTC m=+0.065704100 container remove c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 14:35:08 compute-1 systemd[1]: libpod-conmon-c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83.scope: Deactivated successfully.
Jan 22 14:35:08 compute-1 podman[235326]: 2026-01-22 14:35:08.681718087 +0000 UTC m=+0.042329542 container create 37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 14:35:08 compute-1 systemd[1]: Started libpod-conmon-37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5.scope.
Jan 22 14:35:08 compute-1 systemd[1]: Started libcrun container.
Jan 22 14:35:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a6c855817a9ea6e9eb8e37f976284332c63c0a49f3dfcf1a2277e37be0d264c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 14:35:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a6c855817a9ea6e9eb8e37f976284332c63c0a49f3dfcf1a2277e37be0d264c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 14:35:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a6c855817a9ea6e9eb8e37f976284332c63c0a49f3dfcf1a2277e37be0d264c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 14:35:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a6c855817a9ea6e9eb8e37f976284332c63c0a49f3dfcf1a2277e37be0d264c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 14:35:08 compute-1 podman[235326]: 2026-01-22 14:35:08.662953911 +0000 UTC m=+0.023565396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 14:35:08 compute-1 podman[235326]: 2026-01-22 14:35:08.760868818 +0000 UTC m=+0.121480293 container init 37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 14:35:08 compute-1 podman[235326]: 2026-01-22 14:35:08.767855426 +0000 UTC m=+0.128466881 container start 37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 14:35:08 compute-1 podman[235326]: 2026-01-22 14:35:08.771867124 +0000 UTC m=+0.132478619 container attach 37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 14:35:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:08.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:35:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:09.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:35:09 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:09 compute-1 ceph-mon[81715]: pgmap v2015: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 4.2 KiB/s wr, 1 op/s
Jan 22 14:35:09 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:09 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:10 compute-1 epic_thompson[235342]: [
Jan 22 14:35:10 compute-1 epic_thompson[235342]:     {
Jan 22 14:35:10 compute-1 epic_thompson[235342]:         "available": false,
Jan 22 14:35:10 compute-1 epic_thompson[235342]:         "ceph_device": false,
Jan 22 14:35:10 compute-1 epic_thompson[235342]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:         "lsm_data": {},
Jan 22 14:35:10 compute-1 epic_thompson[235342]:         "lvs": [],
Jan 22 14:35:10 compute-1 epic_thompson[235342]:         "path": "/dev/sr0",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:         "rejected_reasons": [
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "Insufficient space (<5GB)",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "Has a FileSystem"
Jan 22 14:35:10 compute-1 epic_thompson[235342]:         ],
Jan 22 14:35:10 compute-1 epic_thompson[235342]:         "sys_api": {
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "actuators": null,
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "device_nodes": "sr0",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "devname": "sr0",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "human_readable_size": "482.00 KB",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "id_bus": "ata",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "model": "QEMU DVD-ROM",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "nr_requests": "2",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "parent": "/dev/sr0",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "partitions": {},
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "path": "/dev/sr0",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "removable": "1",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "rev": "2.5+",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "ro": "0",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "rotational": "1",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "sas_address": "",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "sas_device_handle": "",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "scheduler_mode": "mq-deadline",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "sectors": 0,
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "sectorsize": "2048",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "size": 493568.0,
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "support_discard": "2048",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "type": "disk",
Jan 22 14:35:10 compute-1 epic_thompson[235342]:             "vendor": "QEMU"
Jan 22 14:35:10 compute-1 epic_thompson[235342]:         }
Jan 22 14:35:10 compute-1 epic_thompson[235342]:     }
Jan 22 14:35:10 compute-1 epic_thompson[235342]: ]
Jan 22 14:35:10 compute-1 systemd[1]: libpod-37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5.scope: Deactivated successfully.
Jan 22 14:35:10 compute-1 systemd[1]: libpod-37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5.scope: Consumed 1.299s CPU time.
Jan 22 14:35:10 compute-1 conmon[235342]: conmon 37d098f5b84bb883da95 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5.scope/container/memory.events
Jan 22 14:35:10 compute-1 podman[235326]: 2026-01-22 14:35:10.042270585 +0000 UTC m=+1.402882040 container died 37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 14:35:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-5a6c855817a9ea6e9eb8e37f976284332c63c0a49f3dfcf1a2277e37be0d264c-merged.mount: Deactivated successfully.
Jan 22 14:35:10 compute-1 podman[235326]: 2026-01-22 14:35:10.102425375 +0000 UTC m=+1.463036830 container remove 37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 14:35:10 compute-1 systemd[1]: libpod-conmon-37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5.scope: Deactivated successfully.
Jan 22 14:35:10 compute-1 sudo[235221]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:10 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:35:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:35:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:35:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:35:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:35:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:10.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:11.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:11 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:11 compute-1 ceph-mon[81715]: pgmap v2016: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 4.2 KiB/s wr, 1 op/s
Jan 22 14:35:12 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:12.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:13.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:13 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:13 compute-1 ceph-mon[81715]: pgmap v2017: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 22 14:35:13 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:14 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:14 compute-1 ceph-mon[81715]: pgmap v2018: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 22 14:35:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:14.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:35:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:15.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:35:15 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:16 compute-1 sudo[236612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:16 compute-1 sudo[236612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:16 compute-1 sudo[236612]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:16 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:16 compute-1 ceph-mon[81715]: pgmap v2019: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 22 14:35:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:16 compute-1 sudo[236637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:35:16 compute-1 sudo[236637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:16 compute-1 sudo[236637]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:35:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:16.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:35:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:17.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:17 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:17 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:35:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1274904820' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:35:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:35:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1274904820' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:35:18 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:18 compute-1 ceph-mon[81715]: pgmap v2020: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1274904820' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:35:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1274904820' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:35:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:18.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:19.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:19 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:20 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:20 compute-1 ceph-mon[81715]: pgmap v2021: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:20.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:21.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:22 compute-1 podman[236662]: 2026-01-22 14:35:22.109011724 +0000 UTC m=+0.093732336 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 14:35:22 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:22.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:23.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:23 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:23 compute-1 ceph-mon[81715]: pgmap v2022: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:23 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:24 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:24 compute-1 ceph-mon[81715]: pgmap v2023: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:24.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:35:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:25.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:35:25 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:26 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:26 compute-1 ceph-mon[81715]: pgmap v2024: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:26.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:27.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:27 compute-1 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:27 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:28 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:28 compute-1 ceph-mon[81715]: pgmap v2025: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:28.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:29.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:29 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:30 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:30 compute-1 ceph-mon[81715]: pgmap v2026: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:30.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:31.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:31 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:35:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 59K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1902 writes, 9828 keys, 1902 commit groups, 1.0 writes per commit group, ingest: 16.83 MB, 0.03 MB/s
                                           Interval WAL: 1903 writes, 1903 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     61.1      1.05              0.21        36    0.029       0      0       0.0       0.0
                                             L6      1/0    8.52 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.9    125.7    106.7      2.93              0.88        35    0.084    271K    19K       0.0       0.0
                                            Sum      1/0    8.52 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.9     92.5     94.6      3.99              1.08        71    0.056    271K    19K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5     80.1     80.4      0.95              0.21        14    0.068     72K   3610       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    125.7    106.7      2.93              0.88        35    0.084    271K    19K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     61.2      1.05              0.21        35    0.030       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.063, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.37 GB write, 0.10 MB/s write, 0.36 GB read, 0.10 MB/s read, 4.0 seconds
                                           Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 40.84 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000269 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2162,39.07 MB,12.8518%) FilterBlock(71,759.30 KB,0.243915%) IndexBlock(71,1.03 MB,0.340045%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 14:35:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:32 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:32 compute-1 ceph-mon[81715]: pgmap v2027: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:32.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:33.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:33 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 3523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:33 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:34 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:34 compute-1 ceph-mon[81715]: pgmap v2028: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 14:35:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:34.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:35:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:35.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:35:35 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:36 compute-1 podman[236688]: 2026-01-22 14:35:36.054974329 +0000 UTC m=+0.042031393 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 14:35:36 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:36 compute-1 ceph-mon[81715]: pgmap v2029: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 14:35:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:36.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:37.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:37 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:38 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:38 compute-1 ceph-mon[81715]: pgmap v2030: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 14:35:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:38.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:39.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:39 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:39 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:40 compute-1 ceph-mon[81715]: pgmap v2031: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 14:35:40 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:40.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:41.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:41 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:42.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:43 compute-1 ceph-mon[81715]: pgmap v2032: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 14:35:43 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:43 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 3528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:43.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:44 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:44.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:45 compute-1 ceph-mon[81715]: pgmap v2033: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 14:35:45 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:35:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:45.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:35:46 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:46.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:47 compute-1 ceph-mon[81715]: pgmap v2034: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:47 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:47.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:35:47.473 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:35:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:35:47.474 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:35:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:35:47.474 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:35:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:48 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:48 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 3537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:48.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:49 compute-1 ceph-mon[81715]: pgmap v2035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:49 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:35:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:49.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:35:50 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:50.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:51.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:51 compute-1 ceph-mon[81715]: pgmap v2036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:51 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:52 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:53.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:53 compute-1 podman[236707]: 2026-01-22 14:35:53.105806203 +0000 UTC m=+0.088424922 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:35:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:53.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:53 compute-1 ceph-mon[81715]: pgmap v2037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:53 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:53 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 3542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:54 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:55.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:55.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:55 compute-1 ceph-mon[81715]: pgmap v2038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:55 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:56 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:57.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:35:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:57.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:35:57 compute-1 ceph-mon[81715]: pgmap v2039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:57 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:58 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:35:58 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 3547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:59.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:35:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:59.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:59 compute-1 ceph-mon[81715]: pgmap v2040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:59 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:00 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:36:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:01.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:36:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:01.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:01 compute-1 ceph-mon[81715]: pgmap v2041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:01 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:02 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.545337) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562545419, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1723, "num_deletes": 255, "total_data_size": 3240435, "memory_usage": 3312272, "flush_reason": "Manual Compaction"}
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562558456, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 2109887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58889, "largest_seqno": 60607, "table_properties": {"data_size": 2103097, "index_size": 3605, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17067, "raw_average_key_size": 20, "raw_value_size": 2088285, "raw_average_value_size": 2546, "num_data_blocks": 155, "num_entries": 820, "num_filter_entries": 820, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092457, "oldest_key_time": 1769092457, "file_creation_time": 1769092562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 13172 microseconds, and 5856 cpu microseconds.
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.558521) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 2109887 bytes OK
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.558540) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.559526) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.559540) EVENT_LOG_v1 {"time_micros": 1769092562559535, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.559555) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 3232307, prev total WAL file size 3232307, number of live WAL files 2.
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.560336) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353133' seq:72057594037927935, type:22 .. '6C6F676D0032373634' seq:0, type:0; will stop at (end)
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(2060KB)], [117(8722KB)]
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562560401, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 11041645, "oldest_snapshot_seqno": -1}
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 10539 keys, 10878286 bytes, temperature: kUnknown
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562614760, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 10878286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10818855, "index_size": 31991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26373, "raw_key_size": 285162, "raw_average_key_size": 27, "raw_value_size": 10637780, "raw_average_value_size": 1009, "num_data_blocks": 1202, "num_entries": 10539, "num_filter_entries": 10539, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.615005) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 10878286 bytes
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.616304) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.8 rd, 199.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.5 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(10.4) write-amplify(5.2) OK, records in: 11070, records dropped: 531 output_compression: NoCompression
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.616319) EVENT_LOG_v1 {"time_micros": 1769092562616311, "job": 74, "event": "compaction_finished", "compaction_time_micros": 54436, "compaction_time_cpu_micros": 25084, "output_level": 6, "num_output_files": 1, "total_output_size": 10878286, "num_input_records": 11070, "num_output_records": 10539, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562616831, "job": 74, "event": "table_file_deletion", "file_number": 119}
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562618235, "job": 74, "event": "table_file_deletion", "file_number": 117}
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.560221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.618315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.618323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.618326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.618329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:02 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.618332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:03.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:03 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:36:03.044 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:36:03 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:36:03.045 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:36:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:03.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:03 compute-1 ceph-mon[81715]: pgmap v2042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:03 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:03 compute-1 ceph-mon[81715]: Health check update: 21 slow ops, oldest one blocked for 3552 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:04 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:05.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:05.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:05 compute-1 ceph-mon[81715]: pgmap v2043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:05 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:06 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:06 compute-1 ceph-mon[81715]: pgmap v2044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:07.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:07 compute-1 podman[236730]: 2026-01-22 14:36:07.077374887 +0000 UTC m=+0.058083695 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 14:36:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:07.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:07 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:08 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:08 compute-1 ceph-mon[81715]: Health check update: 21 slow ops, oldest one blocked for 3557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:08 compute-1 ceph-mon[81715]: pgmap v2045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:09.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:09.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:09 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:10 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:10 compute-1 ceph-mon[81715]: pgmap v2046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:11.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:11 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:12 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:36:12.047 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:36:12 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:12 compute-1 ceph-mon[81715]: pgmap v2047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:13.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:13.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:13 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:13 compute-1 ceph-mon[81715]: Health check update: 21 slow ops, oldest one blocked for 3562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:14 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:14 compute-1 ceph-mon[81715]: pgmap v2048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:14 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:36:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:15.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:36:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:15.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:15 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:16 compute-1 ceph-mon[81715]: pgmap v2049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:16 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:16 compute-1 sudo[236750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:36:16 compute-1 sudo[236750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:16 compute-1 sudo[236750]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:16 compute-1 sudo[236775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:36:16 compute-1 sudo[236775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:16 compute-1 sudo[236775]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:16 compute-1 sudo[236800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:36:16 compute-1 sudo[236800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:16 compute-1 sudo[236800]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:16 compute-1 sudo[236825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:36:16 compute-1 sudo[236825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:36:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:17.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:36:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:17.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:17 compute-1 sudo[236825]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:18 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:18 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:36:18 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:36:18 compute-1 ceph-mon[81715]: pgmap v2050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/472125160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:36:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/472125160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:36:18 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:36:18 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:36:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:36:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:19.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:36:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:19.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:19 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:36:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:36:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:36:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:36:20 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:20 compute-1 ceph-mon[81715]: pgmap v2051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:21.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:21.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:21 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:23 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:23 compute-1 ceph-mon[81715]: pgmap v2052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:23 compute-1 ceph-mon[81715]: Health check update: 21 slow ops, oldest one blocked for 3567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:36:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:23.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:36:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:23.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:24 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:24 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:24 compute-1 podman[236881]: 2026-01-22 14:36:24.087786277 +0000 UTC m=+0.082547435 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 14:36:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:25.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:25.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:25 compute-1 sudo[236907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:36:25 compute-1 sudo[236907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:25 compute-1 sudo[236907]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:25 compute-1 sudo[236932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:36:25 compute-1 sudo[236932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:25 compute-1 sudo[236932]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:25 compute-1 ceph-mon[81715]: pgmap v2053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:25 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:36:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:36:26 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:27.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:27.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:27 compute-1 ceph-mon[81715]: pgmap v2054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:27 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:28 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:28 compute-1 ceph-mon[81715]: Health check update: 21 slow ops, oldest one blocked for 3578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:29.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:29.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:30 compute-1 ceph-mon[81715]: pgmap v2055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:30 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:31.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:31 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:31 compute-1 ceph-mon[81715]: pgmap v2056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:31 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:31.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:32 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:36:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:33.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:36:33 compute-1 ceph-mon[81715]: pgmap v2057: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:33 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:33.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:34 compute-1 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 3583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:34 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:35.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:35 compute-1 ceph-mon[81715]: pgmap v2058: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:35 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:36:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:35.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:36:36 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:36:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:37.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:36:37 compute-1 ceph-mon[81715]: pgmap v2059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:37 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:36:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:37.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:36:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:38 compute-1 podman[236957]: 2026-01-22 14:36:38.075221159 +0000 UTC m=+0.062730030 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 14:36:38 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:39.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:39 compute-1 ceph-mon[81715]: pgmap v2060: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:39 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:39.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:40 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:41.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:41.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:41 compute-1 ceph-mon[81715]: pgmap v2061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:41 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:42 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:36:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:43.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:36:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:43.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:43 compute-1 ceph-mon[81715]: pgmap v2062: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:43 compute-1 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 3593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:43 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:44 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:45.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:45.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:45 compute-1 ceph-mon[81715]: pgmap v2063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:45 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:46 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:36:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:47.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:36:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:36:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:47.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:36:47 compute-1 ceph-mon[81715]: pgmap v2064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:47 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:36:47.475 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:36:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:36:47.475 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:36:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:36:47.475 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:36:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:48 compute-1 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 3598 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:48 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:36:48.699 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:36:48 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:36:48.700 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:36:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:49.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:49.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:49 compute-1 ceph-mon[81715]: pgmap v2065: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:51.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:51.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:51 compute-1 ceph-mon[81715]: pgmap v2066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:51 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:36:51.702 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:36:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:53.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:53.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:53 compute-1 ceph-mon[81715]: pgmap v2067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:53 compute-1 ceph-mon[81715]: Health check update: 0 slow ops, oldest one blocked for 3602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:54 compute-1 ceph-mon[81715]: pgmap v2068: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:36:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:55.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:36:55 compute-1 podman[236976]: 2026-01-22 14:36:55.108511523 +0000 UTC m=+0.087650491 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 14:36:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:55.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:57.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:36:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:57.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:36:57 compute-1 ceph-mon[81715]: pgmap v2069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:58 compute-1 ceph-mon[81715]: Health check update: 0 slow ops, oldest one blocked for 3607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.351032) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618351070, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 966, "num_deletes": 251, "total_data_size": 1655403, "memory_usage": 1681088, "flush_reason": "Manual Compaction"}
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618358248, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 1077454, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60612, "largest_seqno": 61573, "table_properties": {"data_size": 1073224, "index_size": 1818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10654, "raw_average_key_size": 20, "raw_value_size": 1064219, "raw_average_value_size": 2027, "num_data_blocks": 79, "num_entries": 525, "num_filter_entries": 525, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092563, "oldest_key_time": 1769092563, "file_creation_time": 1769092618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 7279 microseconds, and 3502 cpu microseconds.
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.358302) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 1077454 bytes OK
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.358326) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.360294) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.360335) EVENT_LOG_v1 {"time_micros": 1769092618360327, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.360357) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1650442, prev total WAL file size 1650442, number of live WAL files 2.
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.361145) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(1052KB)], [120(10MB)]
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618361220, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 11955740, "oldest_snapshot_seqno": -1}
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 10549 keys, 10380562 bytes, temperature: kUnknown
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618416370, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 10380562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10321358, "index_size": 31700, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26437, "raw_key_size": 286354, "raw_average_key_size": 27, "raw_value_size": 10140404, "raw_average_value_size": 961, "num_data_blocks": 1185, "num_entries": 10549, "num_filter_entries": 10549, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.416649) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 10380562 bytes
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.418517) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.4 rd, 187.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(20.7) write-amplify(9.6) OK, records in: 11064, records dropped: 515 output_compression: NoCompression
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.418559) EVENT_LOG_v1 {"time_micros": 1769092618418529, "job": 76, "event": "compaction_finished", "compaction_time_micros": 55242, "compaction_time_cpu_micros": 28673, "output_level": 6, "num_output_files": 1, "total_output_size": 10380562, "num_input_records": 11064, "num_output_records": 10549, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618418851, "job": 76, "event": "table_file_deletion", "file_number": 122}
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618420731, "job": 76, "event": "table_file_deletion", "file_number": 120}
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.361057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.420860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.420864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.420866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.420867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:58 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.420869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:59.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:36:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:36:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:59.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:36:59 compute-1 ceph-mon[81715]: pgmap v2070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:01.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:37:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:01.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:37:01 compute-1 ceph-mon[81715]: pgmap v2071: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 105 op/s
Jan 22 14:37:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:03.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:03.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:03 compute-1 ceph-mon[81715]: pgmap v2072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 105 op/s
Jan 22 14:37:03 compute-1 ceph-mon[81715]: Health check update: 0 slow ops, oldest one blocked for 3612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:04 compute-1 ceph-mon[81715]: pgmap v2073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Jan 22 14:37:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:05.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:37:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:05.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:37:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:07.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:07.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:07 compute-1 ceph-mon[81715]: pgmap v2074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 0 B/s wr, 218 op/s
Jan 22 14:37:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:08 compute-1 ceph-mon[81715]: Health check update: 0 slow ops, oldest one blocked for 3618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:09 compute-1 podman[237002]: 2026-01-22 14:37:09.053484859 +0000 UTC m=+0.049992307 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 14:37:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:09.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:09.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:09 compute-1 ceph-mon[81715]: pgmap v2075: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 0 B/s wr, 218 op/s
Jan 22 14:37:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:11.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:11.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:11 compute-1 ceph-mon[81715]: pgmap v2076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 0 B/s wr, 218 op/s
Jan 22 14:37:12 compute-1 ceph-mon[81715]: pgmap v2077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 0 B/s wr, 113 op/s
Jan 22 14:37:12 compute-1 ceph-mon[81715]: Health check update: 0 slow ops, oldest one blocked for 3622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 22 14:37:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:13.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 22 14:37:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:13.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:37:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:15.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:37:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:15.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:15 compute-1 ceph-mon[81715]: pgmap v2078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 0 B/s wr, 113 op/s
Jan 22 14:37:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:17.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:17.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:17 compute-1 ceph-mon[81715]: pgmap v2079: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Jan 22 14:37:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:18 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:18 compute-1 ceph-mon[81715]: Health check update: 0 slow ops, oldest one blocked for 3627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:19.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:19.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:19 compute-1 ceph-mon[81715]: pgmap v2080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:19 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2148974794' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:37:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2148974794' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:37:20 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:21.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:21.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:21 compute-1 ceph-mon[81715]: pgmap v2081: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:21 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:22 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:37:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:23.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:37:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:23.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:23 compute-1 ceph-mon[81715]: pgmap v2082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:23 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:23 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:24 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:37:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:25.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:37:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:25.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:25 compute-1 ceph-mon[81715]: pgmap v2083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:25 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:25 compute-1 sudo[237021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:37:25 compute-1 sudo[237021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:25 compute-1 sudo[237021]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:25 compute-1 sudo[237052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:37:25 compute-1 sudo[237052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:25 compute-1 sudo[237052]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:25 compute-1 podman[237045]: 2026-01-22 14:37:25.605177155 +0000 UTC m=+0.076705137 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 22 14:37:25 compute-1 sudo[237098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:37:25 compute-1 sudo[237098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:25 compute-1 sudo[237098]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:25 compute-1 sudo[237123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 14:37:25 compute-1 sudo[237123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:26 compute-1 sudo[237123]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:26 compute-1 sudo[237168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:37:26 compute-1 sudo[237168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:26 compute-1 sudo[237168]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:26 compute-1 sudo[237193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:37:26 compute-1 sudo[237193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:26 compute-1 sudo[237193]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:26 compute-1 sudo[237218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:37:26 compute-1 sudo[237218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:26 compute-1 sudo[237218]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:26 compute-1 sudo[237243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:37:26 compute-1 sudo[237243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:26 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:37:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:37:26 compute-1 sudo[237243]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:27.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:27.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:27 compute-1 ceph-mon[81715]: pgmap v2084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:27 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:37:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:37:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:37:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:37:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:37:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:37:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:27 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:37:27.752 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:37:27 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:37:27.752 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:37:28 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:28 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3637 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:37:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:29.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:37:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:29.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:29 compute-1 ceph-mon[81715]: pgmap v2085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:29 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:30 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:30 compute-1 ceph-mon[81715]: pgmap v2086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:30 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:37:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:37:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:37:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:31.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:37:31 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:31 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:37:31.755 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:37:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:32 compute-1 ceph-mon[81715]: pgmap v2087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:32 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:32 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3642 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:33.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:33.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:33 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:34 compute-1 sudo[237299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:37:34 compute-1 sudo[237299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:34 compute-1 sudo[237299]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:34 compute-1 sudo[237324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:37:34 compute-1 sudo[237324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:34 compute-1 sudo[237324]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:37:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:37:35 compute-1 ceph-mon[81715]: pgmap v2088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:35 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:37:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:35.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:37:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:37:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:35.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:37:36 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:37 compute-1 ceph-mon[81715]: pgmap v2089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:37 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:37:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:37.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:37:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:37.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:38 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:38 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:39 compute-1 ceph-mon[81715]: pgmap v2090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:39.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:39 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:39.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:40 compute-1 podman[237349]: 2026-01-22 14:37:40.059442848 +0000 UTC m=+0.052456693 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:37:40 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:37:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:41.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:37:41 compute-1 ceph-mon[81715]: pgmap v2091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:41 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:41.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:42 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:43 compute-1 sshd-session[237368]: banner exchange: Connection from 3.132.23.201 port 49840: invalid format
Jan 22 14:37:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:37:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:43.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:37:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:43.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:43 compute-1 ceph-mon[81715]: pgmap v2092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:43 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:43 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:44 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:45.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:45.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:45 compute-1 ceph-mon[81715]: pgmap v2093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:45 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:46 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:37:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:37:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:47.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:37:47.476 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:37:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:37:47.476 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:37:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:37:47.477 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:37:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:47 compute-1 ceph-mon[81715]: pgmap v2094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:47 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:48 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:48 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3657 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:48 compute-1 ceph-mon[81715]: pgmap v2095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:48 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:49.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:49.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:49 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:50 compute-1 ceph-mon[81715]: pgmap v2096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:50 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:51.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:51.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:51 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:52 compute-1 ceph-mon[81715]: pgmap v2097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:52 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:52 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:37:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:53.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:37:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:53.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:53 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:54 compute-1 ceph-mon[81715]: pgmap v2098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:54 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:37:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:55.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:37:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:55.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:55 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:56 compute-1 podman[237369]: 2026-01-22 14:37:56.106756692 +0000 UTC m=+0.098027461 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:37:56 compute-1 ceph-mon[81715]: pgmap v2099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:56 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:37:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:57.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:37:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:57.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:58 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:58 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:59 compute-1 ceph-mon[81715]: pgmap v2100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:59 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:59.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:37:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:59.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:00 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:01 compute-1 ceph-mon[81715]: pgmap v2101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:01 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:38:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:01.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:38:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:38:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:01.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:38:02 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:03.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:03.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:03 compute-1 ceph-mon[81715]: pgmap v2102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:03 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:03 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:04 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:05.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:05.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:05 compute-1 ceph-mon[81715]: pgmap v2103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:05 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:06 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:38:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:07.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:38:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:07.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:07 compute-1 ceph-mon[81715]: pgmap v2104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:07 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:08 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:08 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:08 compute-1 ceph-mon[81715]: pgmap v2105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:09.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:09.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:09 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:10 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:10 compute-1 ceph-mon[81715]: pgmap v2106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:10 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:11 compute-1 podman[237396]: 2026-01-22 14:38:11.103987988 +0000 UTC m=+0.060416517 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:38:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:11.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:11.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:11 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:12 compute-1 ceph-mon[81715]: pgmap v2107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:12 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:12 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:13.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:13.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:13 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:14 compute-1 ceph-mon[81715]: pgmap v2108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:14 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:38:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:15.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:38:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:15.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:15 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:16 compute-1 ceph-mon[81715]: pgmap v2109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:16 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:17.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:17.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:17 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:17 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:18 compute-1 ceph-mon[81715]: pgmap v2110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/104070897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:38:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/104070897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:38:18 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:19.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:19.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:19 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:20 compute-1 ceph-mon[81715]: pgmap v2111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:20 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:21.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:21 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:38:21.329 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:38:21 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:38:21.331 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:38:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:21.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:21 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:22 compute-1 ceph-mon[81715]: pgmap v2112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:22 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:22 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:23.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:23 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:38:23.333 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:38:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:23.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:23 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:24 compute-1 ceph-mon[81715]: pgmap v2113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:24 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:25.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:25.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:25 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:26 compute-1 ceph-mon[81715]: pgmap v2114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:26 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:27 compute-1 podman[237413]: 2026-01-22 14:38:27.082531269 +0000 UTC m=+0.078520215 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Jan 22 14:38:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:38:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:27.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:38:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:27.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:27 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:27 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:28 compute-1 ceph-mon[81715]: pgmap v2115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:28 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:29.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:30.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:30 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:38:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:31.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:38:31 compute-1 ceph-mon[81715]: pgmap v2116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:31 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:38:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:32.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:38:32 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:33.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:33 compute-1 ceph-mon[81715]: pgmap v2117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:33 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:33 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:34.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:34 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:34 compute-1 sudo[237438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:34 compute-1 sudo[237438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:34 compute-1 sudo[237438]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:34 compute-1 sudo[237463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:38:34 compute-1 sudo[237463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:34 compute-1 sudo[237463]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:34 compute-1 sudo[237488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:34 compute-1 sudo[237488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:34 compute-1 sudo[237488]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:34 compute-1 sudo[237513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:38:34 compute-1 sudo[237513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:35.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:35 compute-1 sudo[237513]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:35 compute-1 ceph-mon[81715]: pgmap v2118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:35 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 14:38:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:36.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:36 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 14:38:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:37.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:37 compute-1 ceph-mon[81715]: pgmap v2119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:37 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:38:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:38.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:38:38 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:38 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:39.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:39 compute-1 ceph-mon[81715]: pgmap v2120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:39 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:40.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:40 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:40 compute-1 ceph-mon[81715]: pgmap v2121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:41.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:41 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:38:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:38:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:38:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:38:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:38:42 compute-1 podman[237567]: 2026-01-22 14:38:42.069764065 +0000 UTC m=+0.052025033 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 14:38:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:42.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:42 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:42 compute-1 ceph-mon[81715]: pgmap v2122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:43.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:43 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:43 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3712 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:38:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:44.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:38:44 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:44 compute-1 ceph-mon[81715]: pgmap v2123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:45.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:45 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:46.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:46 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:46 compute-1 ceph-mon[81715]: pgmap v2124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:47 compute-1 sudo[237587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:47 compute-1 sudo[237587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:47 compute-1 sudo[237587]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:47 compute-1 sudo[237612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:38:47 compute-1 sudo[237612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:47 compute-1 sudo[237612]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:47.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:38:47.477 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:38:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:38:47.478 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:38:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:38:47.478 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:38:47 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:47 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:47 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:48.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:48 compute-1 ceph-mon[81715]: pgmap v2125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:48 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:49.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:49 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:50.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:50 compute-1 ceph-mon[81715]: pgmap v2126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:50 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:51.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:51 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:38:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:52.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:38:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:52 compute-1 ceph-mon[81715]: pgmap v2127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:52 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:52 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:38:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:53.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:38:53 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:54.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:54 compute-1 ceph-mon[81715]: pgmap v2128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:54 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:55.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:55 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:56.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:56 compute-1 ceph-mon[81715]: pgmap v2129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:56 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:38:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:57.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:38:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:57 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:57 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:58 compute-1 podman[237637]: 2026-01-22 14:38:58.098185129 +0000 UTC m=+0.088803581 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 22 14:38:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:58.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:59 compute-1 ceph-mon[81715]: pgmap v2130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:59 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:38:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:59.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:00 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:00.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:01 compute-1 ceph-mon[81715]: pgmap v2131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:01 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:39:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:01.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:39:02 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:02.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:03 compute-1 ceph-mon[81715]: pgmap v2132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:03 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:03 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:03.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:04 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:04.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:05 compute-1 ceph-mon[81715]: pgmap v2133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:05 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:39:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:05.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:39:06 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:06.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:07 compute-1 ceph-mon[81715]: pgmap v2134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:07 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:07.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:08 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:08 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:08.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:09 compute-1 ceph-mon[81715]: pgmap v2135: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:09 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:09.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:10 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:10.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:11 compute-1 ceph-mon[81715]: pgmap v2136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:11 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:11.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:12 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:39:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:12.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:39:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:13 compute-1 podman[237664]: 2026-01-22 14:39:13.094623574 +0000 UTC m=+0.074055266 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 14:39:13 compute-1 ceph-mon[81715]: pgmap v2137: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:13 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3743 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:13 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:13.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:39:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:14.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:39:14 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:39:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:15.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:39:15 compute-1 ceph-mon[81715]: pgmap v2138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:15 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:39:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:16.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:39:16 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:39:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:17.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:39:17 compute-1 ceph-mon[81715]: pgmap v2139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:17 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:39:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/11134575' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:39:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:39:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/11134575' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:39:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:18.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:18 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:18 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/11134575' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:39:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/11134575' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:39:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:19.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:19 compute-1 ceph-mon[81715]: pgmap v2140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:19 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:20 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:20 compute-1 ceph-mon[81715]: pgmap v2141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:21.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:22.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:22 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:23 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:39:23.128 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:39:23 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:39:23.129 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:39:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:23.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:23 compute-1 ceph-mon[81715]: pgmap v2142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:23 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:23 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:39:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:24.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:39:24 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:25.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:25 compute-1 ceph-mon[81715]: pgmap v2143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:25 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:26.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:26 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:26 compute-1 ceph-mon[81715]: pgmap v2144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:27.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:27 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:39:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:28.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:39:28 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:28 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:28 compute-1 ceph-mon[81715]: pgmap v2145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:29 compute-1 podman[237683]: 2026-01-22 14:39:29.107710366 +0000 UTC m=+0.097833157 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 22 14:39:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:29.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:29 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:30.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:30 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:30 compute-1 ceph-mon[81715]: pgmap v2146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:30 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:39:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:31.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:39:31 compute-1 ceph-mon[81715]: 43 slow requests (by type [ 'delayed' : 43 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:39:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:32.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:32 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/155501559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:39:32 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/155501559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:39:32 compute-1 ceph-mon[81715]: pgmap v2147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:32 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:32 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3763 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:33 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:39:33.132 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:39:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:33.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:33 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:39:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:34.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:39:35 compute-1 ceph-mon[81715]: pgmap v2148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 739 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Jan 22 14:39:35 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:39:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:35.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:39:36 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:39:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:36.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:39:37 compute-1 ceph-mon[81715]: pgmap v2149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:39:37 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:37.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:38 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:38 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:38.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:39 compute-1 ceph-mon[81715]: pgmap v2150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:39:39 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:39.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:40 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:40.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:41 compute-1 ceph-mon[81715]: pgmap v2151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:39:41 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:41.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:41 compute-1 sshd-session[237709]: banner exchange: Connection from 3.132.23.201 port 55208: invalid format
Jan 22 14:39:42 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:39:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:42.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:39:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:43 compute-1 ceph-mon[81715]: pgmap v2152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:39:43 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:43 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:39:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:43.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:39:44 compute-1 podman[237710]: 2026-01-22 14:39:44.085324512 +0000 UTC m=+0.062426222 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:39:44 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:44.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:45 compute-1 ceph-mon[81715]: pgmap v2153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.271909) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785271957, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2424, "num_deletes": 251, "total_data_size": 4875919, "memory_usage": 4950208, "flush_reason": "Manual Compaction"}
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785287910, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 3171350, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61579, "largest_seqno": 63997, "table_properties": {"data_size": 3162263, "index_size": 5325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22746, "raw_average_key_size": 21, "raw_value_size": 3142323, "raw_average_value_size": 2939, "num_data_blocks": 228, "num_entries": 1069, "num_filter_entries": 1069, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092619, "oldest_key_time": 1769092619, "file_creation_time": 1769092785, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 16026 microseconds, and 6016 cpu microseconds.
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.287946) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 3171350 bytes OK
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.287961) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.289578) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.289593) EVENT_LOG_v1 {"time_micros": 1769092785289589, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.289611) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 4864928, prev total WAL file size 4864928, number of live WAL files 2.
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.291068) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(3097KB)], [123(10137KB)]
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785291138, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 13551912, "oldest_snapshot_seqno": -1}
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 11099 keys, 11911206 bytes, temperature: kUnknown
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785347583, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 11911206, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11847570, "index_size": 34788, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 299422, "raw_average_key_size": 26, "raw_value_size": 11655881, "raw_average_value_size": 1050, "num_data_blocks": 1311, "num_entries": 11099, "num_filter_entries": 11099, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092785, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.347890) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11911206 bytes
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.349604) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 239.6 rd, 210.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.9 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(8.0) write-amplify(3.8) OK, records in: 11618, records dropped: 519 output_compression: NoCompression
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.349621) EVENT_LOG_v1 {"time_micros": 1769092785349613, "job": 78, "event": "compaction_finished", "compaction_time_micros": 56557, "compaction_time_cpu_micros": 28410, "output_level": 6, "num_output_files": 1, "total_output_size": 11911206, "num_input_records": 11618, "num_output_records": 11099, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785350281, "job": 78, "event": "table_file_deletion", "file_number": 125}
Jan 22 14:39:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785352498, "job": 78, "event": "table_file_deletion", "file_number": 123}
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.290982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.352540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.352544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.352545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.352547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:39:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.352549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:39:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:39:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:45.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:39:46 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:46 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:46.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:47 compute-1 ceph-mon[81715]: pgmap v2154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 341 B/s wr, 13 op/s
Jan 22 14:39:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:47.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:47 compute-1 sudo[237730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:39:47 compute-1 sudo[237730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:47 compute-1 sudo[237730]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:39:47.478 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:39:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:39:47.478 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:39:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:39:47.478 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:39:47 compute-1 sudo[237755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:39:47 compute-1 sudo[237755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:47 compute-1 sudo[237755]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:47 compute-1 sudo[237780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:39:47 compute-1 sudo[237780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:47 compute-1 sudo[237780]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:47 compute-1 sudo[237805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:39:47 compute-1 sudo[237805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:48 compute-1 sudo[237805]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:48 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:48 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3778 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:48 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:48.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:49.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:49 compute-1 ceph-mon[81715]: pgmap v2155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:39:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:39:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:39:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:39:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:39:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:39:49 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:50.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:51.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:51 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:51 compute-1 ceph-mon[81715]: pgmap v2156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:51 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:39:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:52.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:39:52 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:53.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:53 compute-1 ceph-mon[81715]: pgmap v2157: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:53 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:53 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:54.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:54 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:39:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:55.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:39:55 compute-1 ceph-mon[81715]: pgmap v2158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:39:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:39:55 compute-1 sudo[237861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:39:55 compute-1 sudo[237861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:55 compute-1 sudo[237861]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:55 compute-1 sudo[237886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:39:55 compute-1 sudo[237886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:55 compute-1 sudo[237886]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:56.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:56 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:56 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:57.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:57 compute-1 ceph-mon[81715]: pgmap v2159: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:57 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:58.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:58 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:58 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:58 compute-1 ceph-mon[81715]: pgmap v2160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:39:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:59.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:59 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:00 compute-1 podman[237911]: 2026-01-22 14:40:00.124509417 +0000 UTC m=+0.114366702 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 22 14:40:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:00.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:00 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:00 compute-1 ceph-mon[81715]: Health detail: HEALTH_WARN 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops
Jan 22 14:40:00 compute-1 ceph-mon[81715]: [WRN] SLOW_OPS: 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops
Jan 22 14:40:00 compute-1 ceph-mon[81715]: pgmap v2161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:00 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:01.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:01 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:02.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:02 compute-1 ceph-mon[81715]: pgmap v2162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:02 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:03.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:03 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:03 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:04.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:04 compute-1 ceph-mon[81715]: pgmap v2163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:04 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:05.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:05 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:06.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:07 compute-1 ceph-mon[81715]: pgmap v2164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:07 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e154 e154: 3 total, 3 up, 3 in
Jan 22 14:40:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:07.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:08 compute-1 ceph-mon[81715]: osdmap e154: 3 total, 3 up, 3 in
Jan 22 14:40:08 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:08 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:08.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:09 compute-1 ceph-mon[81715]: pgmap v2166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:09 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:09.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:10 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:10.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:40:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:11.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:40:11 compute-1 ceph-mon[81715]: pgmap v2167: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 22 14:40:12 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:12.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 e155: 3 total, 3 up, 3 in
Jan 22 14:40:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:40:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:13.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:40:13 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:13 compute-1 ceph-mon[81715]: pgmap v2168: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 22 14:40:13 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:13 compute-1 ceph-mon[81715]: osdmap e155: 3 total, 3 up, 3 in
Jan 22 14:40:14 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:14.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:14 compute-1 podman[237939]: 2026-01-22 14:40:14.551642168 +0000 UTC m=+0.079000068 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 14:40:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:15.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:15 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:15 compute-1 ceph-mon[81715]: pgmap v2170: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 22 14:40:16 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:16.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:17.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:17 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:17 compute-1 ceph-mon[81715]: pgmap v2171: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 22 14:40:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:18.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:18 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:18 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4054496500' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:40:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4054496500' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:40:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:19.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:19 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:19 compute-1 ceph-mon[81715]: pgmap v2172: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 22 14:40:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:20.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:20 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:20 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:20 compute-1 ceph-mon[81715]: pgmap v2173: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:21.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:21 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:22.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:22 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:22 compute-1 ceph-mon[81715]: pgmap v2174: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:22 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:23.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:24 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:24.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:25 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:25 compute-1 ceph-mon[81715]: pgmap v2175: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:25.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:26 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:40:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:26.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:40:26 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:40:26.832 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:40:26 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:40:26.834 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:40:27 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:27 compute-1 ceph-mon[81715]: pgmap v2176: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:27.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:28 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:28 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:28.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:29 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:29 compute-1 ceph-mon[81715]: pgmap v2177: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:29.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:30 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:30.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:31 compute-1 podman[237959]: 2026-01-22 14:40:31.089174912 +0000 UTC m=+0.075933066 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 14:40:31 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:31 compute-1 ceph-mon[81715]: pgmap v2178: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:31.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:32 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:40:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:32.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:40:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:33 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:33 compute-1 ceph-mon[81715]: pgmap v2179: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:33 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:33.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:34 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:34.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:35 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:35 compute-1 ceph-mon[81715]: pgmap v2180: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:35.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:35 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:40:35.837 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:40:36 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:36.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:37 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:37 compute-1 ceph-mon[81715]: pgmap v2181: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:37.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.645350) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837645414, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 967, "num_deletes": 256, "total_data_size": 1535158, "memory_usage": 1553008, "flush_reason": "Manual Compaction"}
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837657010, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 1008549, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64002, "largest_seqno": 64964, "table_properties": {"data_size": 1004293, "index_size": 1779, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10872, "raw_average_key_size": 20, "raw_value_size": 995023, "raw_average_value_size": 1842, "num_data_blocks": 77, "num_entries": 540, "num_filter_entries": 540, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092786, "oldest_key_time": 1769092786, "file_creation_time": 1769092837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 11697 microseconds, and 2979 cpu microseconds.
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.657055) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 1008549 bytes OK
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.657073) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.658398) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.658417) EVENT_LOG_v1 {"time_micros": 1769092837658412, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.658433) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1530149, prev total WAL file size 1530149, number of live WAL files 2.
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.658997) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373633' seq:72057594037927935, type:22 .. '6C6F676D0033303135' seq:0, type:0; will stop at (end)
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(984KB)], [126(11MB)]
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837659044, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 12919755, "oldest_snapshot_seqno": -1}
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 11110 keys, 12767119 bytes, temperature: kUnknown
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837722711, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 12767119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12702384, "index_size": 35886, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 301007, "raw_average_key_size": 27, "raw_value_size": 12509374, "raw_average_value_size": 1125, "num_data_blocks": 1353, "num_entries": 11110, "num_filter_entries": 11110, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.722981) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 12767119 bytes
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.724634) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.6 rd, 200.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.4 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(25.5) write-amplify(12.7) OK, records in: 11639, records dropped: 529 output_compression: NoCompression
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.724651) EVENT_LOG_v1 {"time_micros": 1769092837724643, "job": 80, "event": "compaction_finished", "compaction_time_micros": 63770, "compaction_time_cpu_micros": 27598, "output_level": 6, "num_output_files": 1, "total_output_size": 12767119, "num_input_records": 11639, "num_output_records": 11110, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837724904, "job": 80, "event": "table_file_deletion", "file_number": 128}
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837726928, "job": 80, "event": "table_file_deletion", "file_number": 126}
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.658947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.727038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.727047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.727049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.727051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:40:37 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.727053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:40:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:38 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:38 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:38 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:38.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:39.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:39 compute-1 ceph-mon[81715]: pgmap v2182: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:39 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:40.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:41.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:41 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:41 compute-1 ceph-mon[81715]: pgmap v2183: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:42 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:42 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:42.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:43.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e156 e156: 3 total, 3 up, 3 in
Jan 22 14:40:43 compute-1 ceph-mon[81715]: pgmap v2184: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:43 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:43 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:40:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:44.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:40:44 compute-1 ceph-mon[81715]: osdmap e156: 3 total, 3 up, 3 in
Jan 22 14:40:44 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:45 compute-1 podman[237986]: 2026-01-22 14:40:45.088498393 +0000 UTC m=+0.065216857 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 14:40:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:45.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:45 compute-1 ceph-mon[81715]: pgmap v2186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 7.3 KiB/s rd, 1.2 KiB/s wr, 10 op/s
Jan 22 14:40:45 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:46.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:46 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:46 compute-1 ceph-mon[81715]: pgmap v2187: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 7.9 KiB/s rd, 1.4 KiB/s wr, 11 op/s
Jan 22 14:40:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e157 e157: 3 total, 3 up, 3 in
Jan 22 14:40:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:40:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:47.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:40:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:40:47.478 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:40:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:40:47.479 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:40:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:40:47.479 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:40:47 compute-1 ceph-mon[81715]: osdmap e157: 3 total, 3 up, 3 in
Jan 22 14:40:47 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:48.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:48 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:48 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:48 compute-1 ceph-mon[81715]: pgmap v2189: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 9.9 KiB/s rd, 1.7 KiB/s wr, 14 op/s
Jan 22 14:40:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:49.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:49 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:50 compute-1 sshd-session[238005]: banner exchange: Connection from 3.132.23.201 port 44434: invalid format
Jan 22 14:40:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:50.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:50 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:50 compute-1 ceph-mon[81715]: pgmap v2190: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Jan 22 14:40:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:51.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:51 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:52.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 e158: 3 total, 3 up, 3 in
Jan 22 14:40:52 compute-1 ceph-mon[81715]: pgmap v2191: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 3.5 KiB/s wr, 44 op/s
Jan 22 14:40:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:40:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:53.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:40:53 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:53 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:53 compute-1 ceph-mon[81715]: osdmap e158: 3 total, 3 up, 3 in
Jan 22 14:40:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:40:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:54.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:40:54 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:54 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:54 compute-1 ceph-mon[81715]: pgmap v2193: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.1 KiB/s wr, 35 op/s
Jan 22 14:40:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:55.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:55 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:55 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:55 compute-1 sudo[238006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:40:55 compute-1 sudo[238006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:40:55 compute-1 sudo[238006]: pam_unix(sudo:session): session closed for user root
Jan 22 14:40:55 compute-1 sudo[238031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:40:55 compute-1 sudo[238031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:40:55 compute-1 sudo[238031]: pam_unix(sudo:session): session closed for user root
Jan 22 14:40:55 compute-1 sudo[238056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:40:55 compute-1 sudo[238056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:40:55 compute-1 sudo[238056]: pam_unix(sudo:session): session closed for user root
Jan 22 14:40:56 compute-1 sudo[238081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:40:56 compute-1 sudo[238081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:40:56 compute-1 sudo[238081]: pam_unix(sudo:session): session closed for user root
Jan 22 14:40:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:56.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:57 compute-1 ceph-mon[81715]: pgmap v2194: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Jan 22 14:40:57 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:57.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:40:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:40:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:40:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:40:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:40:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:40:58 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:58 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:58.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:59 compute-1 ceph-mon[81715]: pgmap v2195: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 28 op/s
Jan 22 14:40:59 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:40:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:59.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:00 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:00.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:01 compute-1 ceph-mon[81715]: pgmap v2196: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:01 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:01.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:02 compute-1 podman[238137]: 2026-01-22 14:41:02.134035139 +0000 UTC m=+0.119516449 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 22 14:41:02 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:02.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:03 compute-1 ceph-mon[81715]: pgmap v2197: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:03 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:03 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:03.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:03 compute-1 sudo[238163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:41:03 compute-1 sudo[238163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:41:03 compute-1 sudo[238163]: pam_unix(sudo:session): session closed for user root
Jan 22 14:41:03 compute-1 sudo[238188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:41:03 compute-1 sudo[238188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:41:03 compute-1 sudo[238188]: pam_unix(sudo:session): session closed for user root
Jan 22 14:41:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:04.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:41:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:41:04 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:05.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:05 compute-1 ceph-mon[81715]: pgmap v2198: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:05 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:41:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:06.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:41:06 compute-1 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 14:41:06 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:06 compute-1 ceph-mon[81715]: pgmap v2199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:07.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:07 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:08.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:08 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:08 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:08 compute-1 ceph-mon[81715]: pgmap v2200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:41:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:09.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:41:09 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:10.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:10 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:10 compute-1 ceph-mon[81715]: pgmap v2201: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:11.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:11 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:11 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:12.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:12 compute-1 ceph-mon[81715]: pgmap v2202: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:12 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:12 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:13.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:13 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:14.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:14 compute-1 ceph-mon[81715]: pgmap v2203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:14 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:15.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:15 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:16 compute-1 podman[238214]: 2026-01-22 14:41:16.06465035 +0000 UTC m=+0.051636040 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 14:41:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:16.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:16 compute-1 ceph-mon[81715]: pgmap v2204: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:16 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:17.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.663832) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877663882, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 807, "num_deletes": 251, "total_data_size": 1239692, "memory_usage": 1257640, "flush_reason": "Manual Compaction"}
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877672162, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 597682, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64969, "largest_seqno": 65771, "table_properties": {"data_size": 594250, "index_size": 1147, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10006, "raw_average_key_size": 21, "raw_value_size": 586609, "raw_average_value_size": 1264, "num_data_blocks": 49, "num_entries": 464, "num_filter_entries": 464, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092838, "oldest_key_time": 1769092838, "file_creation_time": 1769092877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 8383 microseconds, and 4687 cpu microseconds.
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.672216) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 597682 bytes OK
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.672238) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673795) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673823) EVENT_LOG_v1 {"time_micros": 1769092877673815, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673844) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 1235340, prev total WAL file size 1235340, number of live WAL files 2.
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.674884) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373537' seq:72057594037927935, type:22 .. '6D6772737461740032303038' seq:0, type:0; will stop at (end)
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(583KB)], [129(12MB)]
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877674986, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 13364801, "oldest_snapshot_seqno": -1}
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 11069 keys, 9679072 bytes, temperature: kUnknown
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877737725, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 9679072, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9618895, "index_size": 31392, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27717, "raw_key_size": 300663, "raw_average_key_size": 27, "raw_value_size": 9430810, "raw_average_value_size": 852, "num_data_blocks": 1165, "num_entries": 11069, "num_filter_entries": 11069, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.737970) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 9679072 bytes
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.739265) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 212.8 rd, 154.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.2 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(38.6) write-amplify(16.2) OK, records in: 11574, records dropped: 505 output_compression: NoCompression
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.739281) EVENT_LOG_v1 {"time_micros": 1769092877739273, "job": 82, "event": "compaction_finished", "compaction_time_micros": 62810, "compaction_time_cpu_micros": 25045, "output_level": 6, "num_output_files": 1, "total_output_size": 9679072, "num_input_records": 11574, "num_output_records": 11069, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877739484, "job": 82, "event": "table_file_deletion", "file_number": 131}
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877742014, "job": 82, "event": "table_file_deletion", "file_number": 129}
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.674786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.742068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.742074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.742076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.742078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:41:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.742080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:41:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:17 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:17 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:18.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2752123973' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:41:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2752123973' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:41:18 compute-1 ceph-mon[81715]: pgmap v2205: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:18 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:19.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:20 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:20.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:21 compute-1 ceph-mon[81715]: pgmap v2206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:21 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:21.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:22 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:22.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:23 compute-1 ceph-mon[81715]: pgmap v2207: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:23 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:23 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:23.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:24 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:24.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:25 compute-1 ceph-mon[81715]: pgmap v2208: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:25 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:41:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:25.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:41:26 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:26.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:27 compute-1 ceph-mon[81715]: pgmap v2209: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:27 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:27.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:28 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:28 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:28.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:29 compute-1 ceph-mon[81715]: pgmap v2210: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:29 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:41:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:29.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:41:29 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:41:29.628 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:41:29 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:41:29.629 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:41:30 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:30.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:31 compute-1 ceph-mon[81715]: pgmap v2211: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:31 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:31.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:31 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:41:31.631 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:41:32 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:41:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:32.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:41:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:33 compute-1 podman[238235]: 2026-01-22 14:41:33.097437896 +0000 UTC m=+0.082366722 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 14:41:33 compute-1 ceph-mon[81715]: pgmap v2212: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:33 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:33 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:33.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:34 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:34.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:35 compute-1 ceph-mon[81715]: pgmap v2213: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:35 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:35.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:36.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:36 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:37.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:37 compute-1 ceph-mon[81715]: pgmap v2214: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:37 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:38.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:38 compute-1 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:38 compute-1 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:38 compute-1 ceph-mon[81715]: pgmap v2215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:39.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:39 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:39 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:40.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:40 compute-1 ceph-mon[81715]: pgmap v2216: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:40 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:41:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:41.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:41:41 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:42.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:42 compute-1 ceph-mon[81715]: pgmap v2217: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:42 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:42 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 3892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:43.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:43 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:41:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:44.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:41:44 compute-1 ceph-mon[81715]: pgmap v2218: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:44 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:45.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:45 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:46.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:47 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:47 compute-1 ceph-mon[81715]: pgmap v2219: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:47 compute-1 podman[238261]: 2026-01-22 14:41:47.0556513 +0000 UTC m=+0.051629210 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:41:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:41:47.480 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:41:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:41:47.480 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:41:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:41:47.480 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:41:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:47.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:48 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:48 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 3897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:48.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:49 compute-1 ceph-mon[81715]: pgmap v2220: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:49 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:41:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:49.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:41:50 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:50.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:51 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:51 compute-1 ceph-mon[81715]: pgmap v2221: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:41:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:51.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:52 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:52.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:53 compute-1 ceph-mon[81715]: pgmap v2222: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:41:53 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:53 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 3902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:53.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:54 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:54.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:55 compute-1 ceph-mon[81715]: pgmap v2223: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:41:55 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:55.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:56 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:56.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:57 compute-1 ceph-mon[81715]: pgmap v2224: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:41:57 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:57.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:58 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:58 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 3908 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:41:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:58.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:41:59 compute-1 ceph-mon[81715]: pgmap v2225: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:41:59 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:41:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:41:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:59.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:00 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:00.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:01 compute-1 ceph-mon[81715]: pgmap v2226: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:42:01 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:01.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:02 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:02.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:03 compute-1 ceph-mon[81715]: pgmap v2227: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:03 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:03 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 3913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:03.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:04 compute-1 podman[238281]: 2026-01-22 14:42:04.123526375 +0000 UTC m=+0.106644450 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:42:04 compute-1 sudo[238307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:42:04 compute-1 sudo[238307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:04 compute-1 sudo[238307]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:04 compute-1 sudo[238333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:42:04 compute-1 sudo[238333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:04 compute-1 sudo[238333]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:04 compute-1 sudo[238358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:42:04 compute-1 sudo[238358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:04 compute-1 sudo[238358]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:04 compute-1 sudo[238383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:42:04 compute-1 sudo[238383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:04 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:04.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e159 e159: 3 total, 3 up, 3 in
Jan 22 14:42:04 compute-1 sudo[238383]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:05.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:05 compute-1 ceph-mon[81715]: pgmap v2228: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:05 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:05 compute-1 ceph-mon[81715]: osdmap e159: 3 total, 3 up, 3 in
Jan 22 14:42:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 14:42:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:42:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:42:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:42:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:42:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:42:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:42:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:06.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:06 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:06 compute-1 ceph-mon[81715]: pgmap v2230: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.3 KiB/s wr, 13 op/s
Jan 22 14:42:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:07.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:07 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:08.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:08 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:08 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 3918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:08 compute-1 ceph-mon[81715]: pgmap v2231: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.3 KiB/s wr, 13 op/s
Jan 22 14:42:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:09.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:09 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:10.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:11 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:11 compute-1 ceph-mon[81715]: pgmap v2232: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 22 14:42:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:11.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:12 compute-1 sudo[238440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:42:12 compute-1 sudo[238440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:12 compute-1 sudo[238440]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:12 compute-1 sudo[238465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:42:12 compute-1 sudo[238465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:12 compute-1 sudo[238465]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:12 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:12 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:42:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:42:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:12.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:13 compute-1 ceph-mon[81715]: pgmap v2233: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 22 14:42:13 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:13 compute-1 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 3922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:13.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:14 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:14.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:15 compute-1 ceph-mon[81715]: pgmap v2234: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 22 14:42:15 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:15.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:16 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:16.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:17 compute-1 ceph-mon[81715]: pgmap v2235: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 8.9 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Jan 22 14:42:17 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:17.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:18 compute-1 podman[238490]: 2026-01-22 14:42:18.060335734 +0000 UTC m=+0.052587136 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 22 14:42:18 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:18 compute-1 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 3928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:18.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1612262341' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:42:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1612262341' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:42:19 compute-1 ceph-mon[81715]: pgmap v2236: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 14:42:19 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:19.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:20 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:20.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:21 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:21 compute-1 ceph-mon[81715]: pgmap v2237: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 14:42:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:21.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e160 e160: 3 total, 3 up, 3 in
Jan 22 14:42:22 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:22.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:23 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:23 compute-1 ceph-mon[81715]: osdmap e160: 3 total, 3 up, 3 in
Jan 22 14:42:23 compute-1 ceph-mon[81715]: pgmap v2239: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:23 compute-1 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 3932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:23.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:24 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:24.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:25 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:25 compute-1 ceph-mon[81715]: pgmap v2240: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 614 B/s rd, 0 B/s wr, 1 op/s
Jan 22 14:42:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:25.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:26 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:26.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:27.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:27 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:27 compute-1 ceph-mon[81715]: pgmap v2241: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 22 14:42:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 e161: 3 total, 3 up, 3 in
Jan 22 14:42:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:28 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:28 compute-1 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 3937 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:28 compute-1 ceph-mon[81715]: osdmap e161: 3 total, 3 up, 3 in
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.575691) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948575761, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 1222, "num_deletes": 252, "total_data_size": 2043289, "memory_usage": 2076368, "flush_reason": "Manual Compaction"}
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948639748, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 1341494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65776, "largest_seqno": 66993, "table_properties": {"data_size": 1336559, "index_size": 2266, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13171, "raw_average_key_size": 20, "raw_value_size": 1325558, "raw_average_value_size": 2097, "num_data_blocks": 98, "num_entries": 632, "num_filter_entries": 632, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092877, "oldest_key_time": 1769092877, "file_creation_time": 1769092948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 64308 microseconds, and 4130 cpu microseconds.
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.640007) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 1341494 bytes OK
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.640120) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.641506) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.641530) EVENT_LOG_v1 {"time_micros": 1769092948641521, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.641551) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 2037243, prev total WAL file size 2037243, number of live WAL files 2.
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.643285) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(1310KB)], [132(9452KB)]
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948643356, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 11020566, "oldest_snapshot_seqno": -1}
Jan 22 14:42:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:28.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 11180 keys, 9369009 bytes, temperature: kUnknown
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948715810, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 9369009, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9308578, "index_size": 31390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27973, "raw_key_size": 304170, "raw_average_key_size": 27, "raw_value_size": 9118918, "raw_average_value_size": 815, "num_data_blocks": 1161, "num_entries": 11180, "num_filter_entries": 11180, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.716213) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 9369009 bytes
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.717531) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.0 rd, 129.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.2 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(15.2) write-amplify(7.0) OK, records in: 11701, records dropped: 521 output_compression: NoCompression
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.717546) EVENT_LOG_v1 {"time_micros": 1769092948717539, "job": 84, "event": "compaction_finished", "compaction_time_micros": 72511, "compaction_time_cpu_micros": 45970, "output_level": 6, "num_output_files": 1, "total_output_size": 9369009, "num_input_records": 11701, "num_output_records": 11180, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948717873, "job": 84, "event": "table_file_deletion", "file_number": 134}
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948719339, "job": 84, "event": "table_file_deletion", "file_number": 132}
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.643209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.719363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.719366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.719367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.719369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:42:28 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.719370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:42:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:29.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:30 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:30 compute-1 ceph-mon[81715]: pgmap v2243: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 22 14:42:30 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:42:30.495 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:42:30 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:42:30.496 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:42:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:30.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:31 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:31 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:31 compute-1 ceph-mon[81715]: pgmap v2244: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 22 14:42:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:31.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:32 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:32.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:33 compute-1 ceph-mon[81715]: pgmap v2245: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 22 14:42:33 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:33 compute-1 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 3942 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:33.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:34 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:34.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:35 compute-1 podman[238510]: 2026-01-22 14:42:35.076076604 +0000 UTC m=+0.069162994 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
config_id=ovn_controller, org.label-schema.schema-version=1.0)
Jan 22 14:42:35 compute-1 ceph-mon[81715]: pgmap v2246: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 23 op/s
Jan 22 14:42:35 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:35.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:36 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:36.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:37 compute-1 ceph-mon[81715]: pgmap v2247: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:37 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:37.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:38 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:38 compute-1 ceph-mon[81715]: Health check update: 51 slow ops, oldest one blocked for 3948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:38.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:39 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:42:39.498 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:42:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:39.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:39 compute-1 ceph-mon[81715]: pgmap v2248: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:39 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:40 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:40 compute-1 ceph-mon[81715]: pgmap v2249: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:40.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:41.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:41 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:42.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:42 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:42 compute-1 ceph-mon[81715]: pgmap v2250: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:43.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:43 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:43 compute-1 ceph-mon[81715]: Health check update: 51 slow ops, oldest one blocked for 3953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:44.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:44 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:44 compute-1 ceph-mon[81715]: pgmap v2251: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:45.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:45 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:46.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:46 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:46 compute-1 ceph-mon[81715]: pgmap v2252: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:46 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:42:47.481 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:42:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:42:47.481 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:42:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:42:47.481 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:42:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:47.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:47 compute-1 ceph-mon[81715]: Health check update: 51 slow ops, oldest one blocked for 3958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:47 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:48.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:48 compute-1 ceph-mon[81715]: pgmap v2253: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:48 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:49 compute-1 podman[238536]: 2026-01-22 14:42:49.083630702 +0000 UTC m=+0.069938086 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 14:42:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:49.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:49 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:50.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:50 compute-1 ceph-mon[81715]: pgmap v2254: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:50 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:51.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:51 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:52.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:52 compute-1 ceph-mon[81715]: pgmap v2255: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:52 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:52 compute-1 ceph-mon[81715]: Health check update: 51 slow ops, oldest one blocked for 3963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:53.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:53 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:54.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:54 compute-1 ceph-mon[81715]: pgmap v2256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:54 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:55.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:55 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:56.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:56 compute-1 ceph-mon[81715]: pgmap v2257: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:56 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:42:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:57.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:42:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:57 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:42:57 compute-1 ceph-mon[81715]: Health check update: 51 slow ops, oldest one blocked for 3968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:58.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:58 compute-1 ceph-mon[81715]: pgmap v2258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:58 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:42:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:42:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:59.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:59 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:00.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:00 compute-1 ceph-mon[81715]: pgmap v2259: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:00 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:01.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:02.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:03 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:03 compute-1 ceph-mon[81715]: pgmap v2260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:03 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 3973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:03.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:04 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:04 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:04.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:05 compute-1 ceph-mon[81715]: pgmap v2261: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:05 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:05.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:06 compute-1 podman[238555]: 2026-01-22 14:43:06.126123479 +0000 UTC m=+0.114474793 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 14:43:06 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:06 compute-1 ceph-mon[81715]: pgmap v2262: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:06.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:07 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:07.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:08 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 3978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:08 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:08 compute-1 ceph-mon[81715]: pgmap v2263: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:08.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:09.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:09 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:10.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:10 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:10 compute-1 ceph-mon[81715]: pgmap v2264: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:11.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:11 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:11 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:12 compute-1 sudo[238582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:43:12 compute-1 sudo[238582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:12 compute-1 sudo[238582]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:12 compute-1 sudo[238607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:43:12 compute-1 sudo[238607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:12 compute-1 sudo[238607]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:12 compute-1 sudo[238632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:43:12 compute-1 sudo[238632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:12 compute-1 sudo[238632]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:12 compute-1 sudo[238657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:43:12 compute-1 sudo[238657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:43:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:12.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:43:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:13 compute-1 ceph-mon[81715]: pgmap v2265: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:13 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:13 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 3983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:13 compute-1 sudo[238657]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:13.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:43:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:43:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:43:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:43:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:43:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:43:14 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:14.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:15 compute-1 ceph-mon[81715]: pgmap v2266: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:15 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:15.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:16 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:16.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:43:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:17.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:43:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:17 compute-1 ceph-mon[81715]: pgmap v2267: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:17 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:18.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:18 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 3988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:18 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3087436954' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:43:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3087436954' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:43:18 compute-1 ceph-mon[81715]: pgmap v2268: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:18 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:19.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:20 compute-1 podman[238712]: 2026-01-22 14:43:20.048498636 +0000 UTC m=+0.045969326 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:43:20 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:20.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:21 compute-1 ceph-mon[81715]: pgmap v2269: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:21 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:21.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:22 compute-1 sudo[238733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:43:22 compute-1 sudo[238733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:22 compute-1 sudo[238733]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:22 compute-1 sudo[238758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:43:22 compute-1 sudo[238758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:22 compute-1 sudo[238758]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:22 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:43:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:43:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:22.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:23 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:43:23.040 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:43:23 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:43:23.040 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:43:23 compute-1 ceph-mon[81715]: pgmap v2270: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:23 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:23 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 3993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:23.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:24 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:24 compute-1 ceph-mon[81715]: pgmap v2271: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:24.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:43:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:25.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:43:25 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:26.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:26 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:26 compute-1 ceph-mon[81715]: pgmap v2272: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:27 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:43:27.042 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:43:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:27.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:27 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:27 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 3998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:28.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:29 compute-1 ceph-mon[81715]: pgmap v2273: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:29.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:30 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:30 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:30.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:31 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:31 compute-1 ceph-mon[81715]: pgmap v2274: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:31.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:32 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:32.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:33 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:33 compute-1 ceph-mon[81715]: pgmap v2275: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:33 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:33.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:34 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:34.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:35 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:35 compute-1 ceph-mon[81715]: pgmap v2276: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:35.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:36 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:36.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:37 compute-1 podman[238783]: 2026-01-22 14:43:37.09157283 +0000 UTC m=+0.087864772 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:43:37 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:37 compute-1 ceph-mon[81715]: pgmap v2277: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:37.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:38 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:38 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:38.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:39.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:39 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:39 compute-1 ceph-mon[81715]: pgmap v2278: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:39 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:40 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:40 compute-1 ceph-mon[81715]: pgmap v2279: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:40.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:41.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:42 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:42.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:43 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:43 compute-1 ceph-mon[81715]: pgmap v2280: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:43 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:43.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:44 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:44.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:45 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:45 compute-1 ceph-mon[81715]: pgmap v2281: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:45.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:46 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:46.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:47 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:47 compute-1 ceph-mon[81715]: pgmap v2282: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:43:47.481 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:43:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:43:47.482 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:43:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:43:47.482 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:43:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:47.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:48 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:48 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4017 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:48.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:49 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:49 compute-1 ceph-mon[81715]: pgmap v2283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:49.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:50 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:50.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:51 compute-1 podman[238809]: 2026-01-22 14:43:51.093028248 +0000 UTC m=+0.075426654 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 14:43:51 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:51 compute-1 ceph-mon[81715]: pgmap v2284: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:51.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:52 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:52.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:53 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:53 compute-1 ceph-mon[81715]: pgmap v2285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:53 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4022 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:53.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:54 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:54.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:55 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:55 compute-1 ceph-mon[81715]: pgmap v2286: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:55.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:56 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:56.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:57 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:57 compute-1 ceph-mon[81715]: pgmap v2287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:57.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:58 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:58 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4027 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:59.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:43:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:59.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:59 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:59 compute-1 ceph-mon[81715]: pgmap v2288: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:59 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:00 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:00 compute-1 ceph-mon[81715]: pgmap v2289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:01.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:01.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:01 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:03.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:03 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:03 compute-1 ceph-mon[81715]: pgmap v2290: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:03 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:03.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:04 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:05.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:05 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:05 compute-1 ceph-mon[81715]: pgmap v2291: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:05.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:06 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:06 compute-1 ceph-mon[81715]: pgmap v2292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:07.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:07 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:07.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:08 compute-1 podman[238828]: 2026-01-22 14:44:08.098310538 +0000 UTC m=+0.085866128 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:44:08 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:08 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4037 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:08 compute-1 ceph-mon[81715]: pgmap v2293: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:09.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:09.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:09 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:10 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:10 compute-1 ceph-mon[81715]: pgmap v2294: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:11.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:11.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:11 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:12 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:12 compute-1 ceph-mon[81715]: pgmap v2295: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:13.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:13.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:13 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:13 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4042 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:14 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:14 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:14 compute-1 ceph-mon[81715]: pgmap v2296: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:14 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:44:14.901 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:44:14 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:44:14.902 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:44:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:15.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:15.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:15 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:17.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:17 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:17 compute-1 ceph-mon[81715]: pgmap v2297: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:17.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:18 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:18 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4047 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:44:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2737044789' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:44:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:44:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2737044789' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:44:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:19.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2737044789' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:44:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2737044789' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:44:19 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:19 compute-1 ceph-mon[81715]: pgmap v2298: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:19.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:20 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:21.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:21.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:21 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:21 compute-1 ceph-mon[81715]: pgmap v2299: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:21 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:22 compute-1 podman[238856]: 2026-01-22 14:44:22.055481454 +0000 UTC m=+0.050845948 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 14:44:22 compute-1 sudo[238876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:44:22 compute-1 sudo[238876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:22 compute-1 sudo[238876]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:22 compute-1 sudo[238901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:44:22 compute-1 sudo[238901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:22 compute-1 sudo[238901]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:22 compute-1 sudo[238926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:44:22 compute-1 sudo[238926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:22 compute-1 sudo[238926]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:22 compute-1 sudo[238951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:44:22 compute-1 sudo[238951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:22 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:44:22.904 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:44:22 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:22 compute-1 ceph-mon[81715]: pgmap v2300: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:23.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:23 compute-1 sudo[238951]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:23.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:24 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4052 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:24 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:25.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:25 compute-1 ceph-mon[81715]: pgmap v2301: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:25 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:25.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:26 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:44:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:44:26 compute-1 ceph-mon[81715]: pgmap v2302: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:44:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:44:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:44:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:44:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:44:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:44:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:44:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:27.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:44:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:27.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:27 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:28 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:28 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:28 compute-1 ceph-mon[81715]: pgmap v2303: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:28 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:29.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:44:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:29.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:44:29 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:30 compute-1 ceph-mon[81715]: pgmap v2304: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:30 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:31.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:31.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:31 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:32 compute-1 ceph-mon[81715]: pgmap v2305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:32 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:33.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:33 compute-1 sudo[239006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:44:33 compute-1 sudo[239006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:33 compute-1 sudo[239006]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:33 compute-1 sudo[239031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:44:33 compute-1 sudo[239031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:33 compute-1 sudo[239031]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:33.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:33 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:44:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:44:33 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:44:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.5 total, 600.0 interval
                                           Cumulative writes: 11K writes, 40K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 11K writes, 3637 syncs, 3.29 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1012 writes, 2061 keys, 1012 commit groups, 1.0 writes per commit group, ingest: 0.95 MB, 0.00 MB/s
                                           Interval WAL: 1012 writes, 474 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 14:44:34 compute-1 ceph-mon[81715]: pgmap v2306: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:34 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:35.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:35.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:35 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:37 compute-1 ceph-mon[81715]: pgmap v2307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:37 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:37.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:37.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:38 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:38 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:39 compute-1 ceph-mon[81715]: pgmap v2308: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:39 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:39.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:39 compute-1 podman[239056]: 2026-01-22 14:44:39.119804022 +0000 UTC m=+0.105813748 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 22 14:44:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:39.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:40 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:41.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:41 compute-1 ceph-mon[81715]: pgmap v2309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 22 14:44:41 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:41.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:42 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:43.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:43 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:43 compute-1 ceph-mon[81715]: pgmap v2310: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 22 14:44:43 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:43.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:44 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:44 compute-1 ceph-mon[81715]: pgmap v2311: 305 pgs: 2 active+clean+laggy, 303 active+clean; 694 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 614 KiB/s wr, 13 op/s
Jan 22 14:44:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:45.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:45 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:45 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:46 compute-1 ceph-mon[81715]: pgmap v2312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 14:44:46 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:44:47.482 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:44:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:44:47.482 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:44:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:44:47.482 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:44:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:47.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:48 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:48 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:49.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:49 compute-1 ceph-mon[81715]: pgmap v2313: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 14:44:49 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:49.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:50 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:44:51 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3107862613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:44:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:44:51 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3107862613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:44:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:51.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:51 compute-1 ceph-mon[81715]: pgmap v2314: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 14:44:51 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3107862613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:44:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3107862613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:44:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:51.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:52 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:53 compute-1 podman[239083]: 2026-01-22 14:44:53.074168995 +0000 UTC m=+0.065191457 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 14:44:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:53.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:53 compute-1 ceph-mon[81715]: pgmap v2315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 22 14:44:53 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:53 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:53.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:54 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:55.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:55 compute-1 ceph-mon[81715]: pgmap v2316: 305 pgs: 2 active+clean+laggy, 303 active+clean; 711 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 22 14:44:55 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:55.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:56 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:44:56.424 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:44:56 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:44:56.424 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:44:56 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:57.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:57 compute-1 ceph-mon[81715]: pgmap v2317: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.2 MiB/s wr, 44 op/s
Jan 22 14:44:57 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:57.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:58 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:58 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:58 compute-1 ceph-mon[81715]: pgmap v2318: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:44:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:59.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:44:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:44:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:59.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:44:59 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:59 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:00 compute-1 ceph-mon[81715]: pgmap v2319: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:45:00 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:01.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:01.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:01 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:02 compute-1 ceph-mon[81715]: pgmap v2320: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:45:02 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:02 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:03.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:03.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:04 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:04 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:45:04.426 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:45:05 compute-1 ceph-mon[81715]: pgmap v2321: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:45:05 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:05.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:05.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:06 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:07.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:07 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:07 compute-1 ceph-mon[81715]: pgmap v2322: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 341 B/s wr, 13 op/s
Jan 22 14:45:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:07.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.916111) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093107916153, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2304, "num_deletes": 257, "total_data_size": 4497747, "memory_usage": 4561632, "flush_reason": "Manual Compaction"}
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093107933612, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 2933109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66998, "largest_seqno": 69297, "table_properties": {"data_size": 2924499, "index_size": 4911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21758, "raw_average_key_size": 21, "raw_value_size": 2905497, "raw_average_value_size": 2807, "num_data_blocks": 213, "num_entries": 1035, "num_filter_entries": 1035, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092948, "oldest_key_time": 1769092948, "file_creation_time": 1769093107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 17586 microseconds, and 6790 cpu microseconds.
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.933699) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 2933109 bytes OK
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.933719) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.934925) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.934938) EVENT_LOG_v1 {"time_micros": 1769093107934933, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.934954) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 4487194, prev total WAL file size 4487194, number of live WAL files 2.
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.935990) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303134' seq:72057594037927935, type:22 .. '6C6F676D0033323637' seq:0, type:0; will stop at (end)
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(2864KB)], [135(9149KB)]
Jan 22 14:45:07 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093107936040, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 12302118, "oldest_snapshot_seqno": -1}
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 11688 keys, 12155432 bytes, temperature: kUnknown
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093108003775, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 12155432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12089398, "index_size": 35713, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29253, "raw_key_size": 316715, "raw_average_key_size": 27, "raw_value_size": 11888466, "raw_average_value_size": 1017, "num_data_blocks": 1339, "num_entries": 11688, "num_filter_entries": 11688, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.004054) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12155432 bytes
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.005276) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.4 rd, 179.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 8.9 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(8.3) write-amplify(4.1) OK, records in: 12215, records dropped: 527 output_compression: NoCompression
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.005295) EVENT_LOG_v1 {"time_micros": 1769093108005286, "job": 86, "event": "compaction_finished", "compaction_time_micros": 67823, "compaction_time_cpu_micros": 27907, "output_level": 6, "num_output_files": 1, "total_output_size": 12155432, "num_input_records": 12215, "num_output_records": 11688, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093108005944, "job": 86, "event": "table_file_deletion", "file_number": 137}
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093108007597, "job": 86, "event": "table_file_deletion", "file_number": 135}
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.935943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.007704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.007710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.007711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.007713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:08 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.007714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:08 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:08 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:45:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:09.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:45:09 compute-1 ceph-mon[81715]: pgmap v2323: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:09 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:09.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:10 compute-1 podman[239105]: 2026-01-22 14:45:10.160935131 +0000 UTC m=+0.142583714 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 14:45:10 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:11.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:11 compute-1 ceph-mon[81715]: pgmap v2324: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:11 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:11.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:12 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:12 compute-1 ceph-mon[81715]: pgmap v2325: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:13.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:13.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:13 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:13 compute-1 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:13 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:14 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:14 compute-1 ceph-mon[81715]: pgmap v2326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:15.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:15.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:15 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:17 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:17 compute-1 ceph-mon[81715]: pgmap v2327: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:17.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:17.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:18 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:18 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 4107 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:45:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2689568655' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:45:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:45:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2689568655' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.713867) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118713923, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 389, "num_deletes": 251, "total_data_size": 292936, "memory_usage": 300440, "flush_reason": "Manual Compaction"}
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118717189, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 192034, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69302, "largest_seqno": 69686, "table_properties": {"data_size": 189789, "index_size": 344, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5891, "raw_average_key_size": 18, "raw_value_size": 185289, "raw_average_value_size": 595, "num_data_blocks": 15, "num_entries": 311, "num_filter_entries": 311, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093108, "oldest_key_time": 1769093108, "file_creation_time": 1769093118, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 3358 microseconds, and 1214 cpu microseconds.
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.717229) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 192034 bytes OK
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.717247) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.718554) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.718571) EVENT_LOG_v1 {"time_micros": 1769093118718565, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.718586) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 290358, prev total WAL file size 290358, number of live WAL files 2.
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.718972) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(187KB)], [138(11MB)]
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118719006, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 12347466, "oldest_snapshot_seqno": -1}
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 11488 keys, 10715333 bytes, temperature: kUnknown
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118798253, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 10715333, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10651715, "index_size": 33809, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28741, "raw_key_size": 313328, "raw_average_key_size": 27, "raw_value_size": 10455224, "raw_average_value_size": 910, "num_data_blocks": 1254, "num_entries": 11488, "num_filter_entries": 11488, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093118, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.798484) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10715333 bytes
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.800131) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.6 rd, 135.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.6 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(120.1) write-amplify(55.8) OK, records in: 11999, records dropped: 511 output_compression: NoCompression
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.800147) EVENT_LOG_v1 {"time_micros": 1769093118800140, "job": 88, "event": "compaction_finished", "compaction_time_micros": 79330, "compaction_time_cpu_micros": 35215, "output_level": 6, "num_output_files": 1, "total_output_size": 10715333, "num_input_records": 11999, "num_output_records": 11488, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118800376, "job": 88, "event": "table_file_deletion", "file_number": 140}
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118802859, "job": 88, "event": "table_file_deletion", "file_number": 138}
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.718935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.802986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.802993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.802996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.802999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:18 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.803001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:19.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2689568655' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:45:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2689568655' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:45:19 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:19 compute-1 ceph-mon[81715]: pgmap v2328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:19.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:20 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:21.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:21 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:21 compute-1 ceph-mon[81715]: pgmap v2329: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:21.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:22 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:23.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:23 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:23 compute-1 ceph-mon[81715]: pgmap v2330: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:23 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 4112 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:23.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:24 compute-1 podman[239133]: 2026-01-22 14:45:24.049389999 +0000 UTC m=+0.045299288 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 22 14:45:24 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:25.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:25 compute-1 ceph-mon[81715]: pgmap v2331: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:25 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:25.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:26 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:27.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:27 compute-1 ceph-mon[81715]: pgmap v2332: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:27 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:27.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:28 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:28 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 4118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:29.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:29.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:30 compute-1 ceph-mon[81715]: pgmap v2333: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:30 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:31 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:31 compute-1 ceph-mon[81715]: pgmap v2334: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:31 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:31.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:45:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 69K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1872 writes, 9897 keys, 1872 commit groups, 1.0 writes per commit group, ingest: 16.53 MB, 0.03 MB/s
                                           Interval WAL: 1872 writes, 1872 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     63.8      1.19              0.24        44    0.027       0      0       0.0       0.0
                                             L6      1/0   10.22 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.2    134.3    115.1      3.45              1.12        43    0.080    364K    23K       0.0       0.0
                                            Sum      1/0   10.22 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.2     99.7    101.9      4.64              1.36        87    0.053    364K    23K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1    143.6    146.2      0.65              0.28        16    0.041     92K   4158       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    134.3    115.1      3.45              1.12        43    0.080    364K    23K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     63.9      1.19              0.24        43    0.028       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.074, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.46 GB write, 0.11 MB/s write, 0.45 GB read, 0.11 MB/s read, 4.6 seconds
                                           Interval compaction: 0.09 GB write, 0.16 MB/s write, 0.09 GB read, 0.16 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 50.61 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000213 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2671,48.27 MB,15.8792%) FilterBlock(87,1018.30 KB,0.327115%) IndexBlock(87,1.34 MB,0.440181%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 14:45:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:31.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:32 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:33 compute-1 ceph-mon[81715]: pgmap v2335: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:33 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:33 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 4123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:33.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:33 compute-1 sudo[239152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:45:33 compute-1 sudo[239152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:33 compute-1 sudo[239152]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:33 compute-1 sudo[239177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:45:33 compute-1 sudo[239177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:33 compute-1 sudo[239177]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:33.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:33 compute-1 sudo[239202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:45:33 compute-1 sudo[239202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:33 compute-1 sudo[239202]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:33 compute-1 sudo[239227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 14:45:33 compute-1 sudo[239227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:34 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:34 compute-1 podman[239324]: 2026-01-22 14:45:34.482792088 +0000 UTC m=+0.079295709 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 14:45:34 compute-1 podman[239324]: 2026-01-22 14:45:34.566521406 +0000 UTC m=+0.163025017 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 14:45:34 compute-1 sudo[239227]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:35 compute-1 ceph-mon[81715]: pgmap v2336: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:35 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:45:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:45:35 compute-1 sudo[239448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:45:35 compute-1 sudo[239448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:35 compute-1 sudo[239448]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:35 compute-1 sudo[239473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:45:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:35.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:35 compute-1 sudo[239473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:35 compute-1 sudo[239473]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:35 compute-1 sudo[239498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:45:35 compute-1 sudo[239498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:35 compute-1 sudo[239498]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:35 compute-1 sudo[239523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:45:35 compute-1 sudo[239523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:35 compute-1 sudo[239523]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:35.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:36 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:45:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:45:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:45:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:45:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:45:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:45:37 compute-1 ceph-mon[81715]: pgmap v2337: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:37 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:37.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:37.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:38 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:38 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 4128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:39 compute-1 ceph-mon[81715]: pgmap v2338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:39 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:39.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:39.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:40 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:41 compute-1 podman[239579]: 2026-01-22 14:45:41.118434331 +0000 UTC m=+0.097459482 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 14:45:41 compute-1 ceph-mon[81715]: pgmap v2339: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:41 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:41.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:41.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:42 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:45:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:45:42 compute-1 sudo[239608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:45:42 compute-1 sudo[239608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:42 compute-1 sudo[239608]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:42 compute-1 sudo[239633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:45:42 compute-1 sudo[239633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:42 compute-1 sudo[239633]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:43.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:43 compute-1 ceph-mon[81715]: pgmap v2340: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:43 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:43 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 4133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:43.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:44 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:45.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:45 compute-1 ceph-mon[81715]: pgmap v2341: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:45 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:45.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:46 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:45:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:47.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:45:47 compute-1 ceph-mon[81715]: pgmap v2342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:47 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:45:47.483 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:45:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:45:47.484 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:45:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:45:47.484 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:45:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:47.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:48 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:48 compute-1 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:49.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:49 compute-1 ceph-mon[81715]: pgmap v2343: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:49 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:49.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:50 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:51.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:51 compute-1 ceph-mon[81715]: pgmap v2344: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:51 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:51.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:52 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:53.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:53 compute-1 ceph-mon[81715]: pgmap v2345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:53 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:53 compute-1 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:53.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:54 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:55 compute-1 podman[239658]: 2026-01-22 14:45:55.064415068 +0000 UTC m=+0.049872461 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 14:45:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:55.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:55 compute-1 ceph-mon[81715]: pgmap v2346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:55 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:55.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:56 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:57.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:57 compute-1 ceph-mon[81715]: pgmap v2347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:57 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:57 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:45:57.704 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:45:57 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:45:57.705 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:45:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:57.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:58 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:58 compute-1 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:59.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:59 compute-1 ceph-mon[81715]: pgmap v2348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:59 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:45:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:59.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:00 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:46:00.707 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:46:00 compute-1 ceph-mon[81715]: 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 14:46:00 compute-1 ceph-mon[81715]: pgmap v2349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:01.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:01 compute-1 ceph-mon[81715]: 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 14:46:01 compute-1 ceph-mon[81715]: 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 14:46:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:01.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:02 compute-1 ceph-mon[81715]: pgmap v2350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:02 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:03.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:03.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:03 compute-1 ceph-mon[81715]: Health check update: 46 slow ops, oldest one blocked for 4153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:03 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:04 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:04 compute-1 ceph-mon[81715]: pgmap v2351: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:05.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:46:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:05.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:46:05 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:07 compute-1 ceph-mon[81715]: pgmap v2352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:07 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:07.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:07.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:08 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:08 compute-1 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 4158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:09 compute-1 ceph-mon[81715]: pgmap v2353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:09 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:09.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:09.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:10 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:11 compute-1 ceph-mon[81715]: pgmap v2354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:11 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 14:46:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:11.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 14:46:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:11.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:12 compute-1 podman[239678]: 2026-01-22 14:46:12.096975976 +0000 UTC m=+0.085104836 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 14:46:12 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:13 compute-1 ceph-mon[81715]: pgmap v2355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:13 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:13 compute-1 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 4163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:13.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:13.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:14 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:15 compute-1 ceph-mon[81715]: pgmap v2356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:15 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:15.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:15.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:16 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:16 compute-1 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 22 14:46:17 compute-1 ceph-mon[81715]: pgmap v2357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:17 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:17.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:17.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:18 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:18 compute-1 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 4168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:18 compute-1 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 22 14:46:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3904739524' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:46:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3904739524' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:46:19 compute-1 ceph-mon[81715]: pgmap v2358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:19 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:19.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:46:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:19.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:46:20 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:46:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:21.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:46:21 compute-1 ceph-mon[81715]: pgmap v2359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 14:46:21 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:21.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:22 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:23.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:23 compute-1 ceph-mon[81715]: pgmap v2360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 14:46:23 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:23 compute-1 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 4173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:23.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:24 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:25.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:25 compute-1 ceph-mon[81715]: pgmap v2361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 14:46:25 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:25.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:26 compute-1 podman[239704]: 2026-01-22 14:46:26.088369954 +0000 UTC m=+0.071996982 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 14:46:26 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:27.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:27 compute-1 ceph-mon[81715]: pgmap v2362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 14:46:27 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:27.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:28 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:28 compute-1 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 4178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:29.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:29 compute-1 ceph-mon[81715]: pgmap v2363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 14:46:29 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:29.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:30 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:31.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:31 compute-1 ceph-mon[81715]: pgmap v2364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 14:46:31 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:31.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:32 compute-1 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:33.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:33 compute-1 ceph-mon[81715]: pgmap v2365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:33 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:33 compute-1 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 4183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:33.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:34 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:34 compute-1 ceph-mon[81715]: pgmap v2366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 22 14:46:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:35.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:35 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:35.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:36 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:36 compute-1 ceph-mon[81715]: pgmap v2367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 14:46:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:37.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:37 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:37.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:38 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:38 compute-1 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 4188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:38 compute-1 ceph-mon[81715]: pgmap v2368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 14:46:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:39.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:39 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:39.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:40 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:40 compute-1 ceph-mon[81715]: pgmap v2369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 14:46:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:41.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:41 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:41 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:41.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:42 compute-1 sshd-session[239724]: Connection closed by 154.41.135.50 port 16458 [preauth]
Jan 22 14:46:42 compute-1 sudo[239726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:46:42 compute-1 sudo[239726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:42 compute-1 sudo[239726]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:42 compute-1 sudo[239760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:46:42 compute-1 sudo[239760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:42 compute-1 sudo[239760]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:42 compute-1 podman[239750]: 2026-01-22 14:46:42.613201047 +0000 UTC m=+0.082247510 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:46:42 compute-1 sudo[239802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:46:42 compute-1 sudo[239802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:42 compute-1 sudo[239802]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:42 compute-1 sudo[239828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:46:42 compute-1 sudo[239828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:42 compute-1 ceph-mon[81715]: pgmap v2370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 14:46:42 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:43 compute-1 sudo[239828]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:43.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:43.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:43 compute-1 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 4193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:46:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:46:43 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:46:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:46:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:46:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:46:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:46:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:46:44 compute-1 ceph-mon[81715]: pgmap v2371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 14:46:44 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:45.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:46:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:45.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:46:45 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:46 compute-1 ceph-mon[81715]: pgmap v2372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 7.9 KiB/s rd, 8 op/s
Jan 22 14:46:46 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:47.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:46:47.484 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:46:47.484 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:46:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:46:47.484 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:46:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:47.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:47 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:47 compute-1 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 4198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:48 compute-1 ceph-mon[81715]: pgmap v2373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:46:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:49.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:46:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:49.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:50 compute-1 sudo[239882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:46:50 compute-1 sudo[239882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:50 compute-1 sudo[239882]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:50 compute-1 sudo[239907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:46:50 compute-1 sudo[239907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:50 compute-1 sudo[239907]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:50 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:50 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:46:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:46:50 compute-1 ceph-mon[81715]: pgmap v2374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:51.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:51 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:46:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:51.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:46:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:52 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:52 compute-1 ceph-mon[81715]: pgmap v2375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:52 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:53.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:53 compute-1 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 4203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:53 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:53.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:54 compute-1 ceph-mon[81715]: pgmap v2376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:54 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:55.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:55 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:55.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:56 compute-1 ceph-mon[81715]: pgmap v2377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:56 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:57 compute-1 podman[239932]: 2026-01-22 14:46:57.068966723 +0000 UTC m=+0.056783689 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 14:46:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:57.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:57 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:57.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:58 compute-1 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 4208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:58 compute-1 ceph-mon[81715]: pgmap v2378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:58 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:59.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:46:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:59.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:59 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:47:00 compute-1 ceph-mon[81715]: pgmap v2379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 22 14:47:00 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:47:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:01.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:01 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:47:01.874 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:47:01 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:47:01.875 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:47:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:01.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:02 compute-1 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:47:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:03 compute-1 ceph-mon[81715]: pgmap v2380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 22 14:47:03 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:03 compute-1 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 4213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:03.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:03.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:04 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:05.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:05 compute-1 ceph-mon[81715]: pgmap v2381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 121 op/s
Jan 22 14:47:05 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:05.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:06 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:06 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:47:06.878 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:47:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:07.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:07 compute-1 ceph-mon[81715]: pgmap v2382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 14:47:07 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:07.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:08 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:08 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:08 compute-1 ceph-mon[81715]: pgmap v2383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 14:47:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:09.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:09 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:09.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:10 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:10 compute-1 ceph-mon[81715]: pgmap v2384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 14:47:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:11.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:11 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:11.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:12 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:12 compute-1 ceph-mon[81715]: pgmap v2385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 116 op/s
Jan 22 14:47:13 compute-1 podman[239951]: 2026-01-22 14:47:13.130576266 +0000 UTC m=+0.121060990 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Jan 22 14:47:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:13.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:13 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:13 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:13 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:13.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:14 compute-1 ceph-mon[81715]: pgmap v2386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 116 op/s
Jan 22 14:47:14 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:15.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:15.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:16 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:17 compute-1 sshd-session[239979]: Connection closed by 54.89.106.110 port 15858 [preauth]
Jan 22 14:47:17 compute-1 ceph-mon[81715]: pgmap v2387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Jan 22 14:47:17 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:17.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:17.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:18 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:18 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:47:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2772379494' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:47:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:47:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2772379494' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:47:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2772379494' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:47:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2772379494' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:47:19 compute-1 ceph-mon[81715]: pgmap v2388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:19 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:19.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:19.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:20 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:21 compute-1 ceph-mon[81715]: pgmap v2389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:21 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:21.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:21.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:22 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:23.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:23 compute-1 ceph-mon[81715]: pgmap v2390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:23 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:23 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:23.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:24 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:24 compute-1 ceph-mon[81715]: pgmap v2391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:25.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:25 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:25.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:27 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:27 compute-1 ceph-mon[81715]: pgmap v2392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:27.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:27.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:28 compute-1 podman[239981]: 2026-01-22 14:47:28.101941503 +0000 UTC m=+0.086833354 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 14:47:28 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:28 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:28 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:29 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:29 compute-1 ceph-mon[81715]: pgmap v2393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:29.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:30.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:30 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:31.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:31 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:31 compute-1 ceph-mon[81715]: pgmap v2394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:32.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:32 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:33.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:33 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:33 compute-1 ceph-mon[81715]: pgmap v2395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:33 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:34.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:34 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:35.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:35 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:35 compute-1 ceph-mon[81715]: pgmap v2396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:36.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:36 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:36 compute-1 ceph-mon[81715]: pgmap v2397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:36 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:37.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:37 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:38.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:39.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:39 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:39 compute-1 ceph-mon[81715]: pgmap v2398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:39 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:40.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:40 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:41.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:41 compute-1 ceph-mon[81715]: pgmap v2399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:41 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:42.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:42 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:43.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:43 compute-1 ceph-mon[81715]: pgmap v2400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:43 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:43 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:44.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:44 compute-1 podman[239999]: 2026-01-22 14:47:44.107698193 +0000 UTC m=+0.104544943 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 14:47:44 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:45.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:45 compute-1 ceph-mon[81715]: pgmap v2401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:45 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:46.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:46 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:47.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:47:47.486 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:47:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:47:47.486 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:47:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:47:47.486 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:47:47 compute-1 ceph-mon[81715]: pgmap v2402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:47 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:48.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:48 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:48 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:49.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:49 compute-1 ceph-mon[81715]: pgmap v2403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:49 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:50.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:50 compute-1 sudo[240025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:47:50 compute-1 sudo[240025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:50 compute-1 sudo[240025]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:50 compute-1 sudo[240050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:47:50 compute-1 sudo[240050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:50 compute-1 sudo[240050]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:50 compute-1 sudo[240075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:47:50 compute-1 sudo[240075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:50 compute-1 sudo[240075]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:50 compute-1 sudo[240100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 14:47:50 compute-1 sudo[240100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:50 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:50 compute-1 sudo[240100]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:50 compute-1 sudo[240144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:47:50 compute-1 sudo[240144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:50 compute-1 sudo[240144]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:50 compute-1 sudo[240169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:47:50 compute-1 sudo[240169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:50 compute-1 sudo[240169]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:50 compute-1 sudo[240194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:47:50 compute-1 sudo[240194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:50 compute-1 sudo[240194]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:50 compute-1 sudo[240219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:47:50 compute-1 sudo[240219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:51.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:51 compute-1 sudo[240219]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:51 compute-1 ceph-mon[81715]: pgmap v2404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:47:51 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:47:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:47:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:47:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:47:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:47:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:47:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:47:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:52.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:52 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:52 compute-1 ceph-mon[81715]: pgmap v2405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:53.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:53 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:53 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:54.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:55 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:55 compute-1 ceph-mon[81715]: pgmap v2406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:55.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:56.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:56 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:56 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:57.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:57 compute-1 ceph-mon[81715]: pgmap v2407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:57 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:58.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:58 compute-1 sudo[240275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:47:58 compute-1 sudo[240275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:58 compute-1 sudo[240275]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:58 compute-1 sudo[240301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:47:58 compute-1 sudo[240301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:58 compute-1 sudo[240301]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:58 compute-1 podman[240299]: 2026-01-22 14:47:58.336653856 +0000 UTC m=+0.082477245 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Jan 22 14:47:58 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:58 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:47:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:47:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:47:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:59.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:59 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:59 compute-1 ceph-mon[81715]: pgmap v2408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:00.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:00 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.878590) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280878637, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 2436, "num_deletes": 251, "total_data_size": 4754012, "memory_usage": 4825040, "flush_reason": "Manual Compaction"}
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280895373, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 3081552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69691, "largest_seqno": 72122, "table_properties": {"data_size": 3072496, "index_size": 5229, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23031, "raw_average_key_size": 21, "raw_value_size": 3052524, "raw_average_value_size": 2823, "num_data_blocks": 226, "num_entries": 1081, "num_filter_entries": 1081, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093119, "oldest_key_time": 1769093119, "file_creation_time": 1769093280, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 16838 microseconds, and 8103 cpu microseconds.
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.895431) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 3081552 bytes OK
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.895454) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.897500) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.897516) EVENT_LOG_v1 {"time_micros": 1769093280897511, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.897533) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 4742925, prev total WAL file size 4742925, number of live WAL files 2.
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.898785) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(3009KB)], [141(10MB)]
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280898877, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 13796885, "oldest_snapshot_seqno": -1}
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 12052 keys, 12161721 bytes, temperature: kUnknown
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280965485, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 12161721, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12093661, "index_size": 36843, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30149, "raw_key_size": 326789, "raw_average_key_size": 27, "raw_value_size": 11886395, "raw_average_value_size": 986, "num_data_blocks": 1377, "num_entries": 12052, "num_filter_entries": 12052, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093280, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.965961) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 12161721 bytes
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.967371) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.8 rd, 182.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 10.2 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 12569, records dropped: 517 output_compression: NoCompression
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.967410) EVENT_LOG_v1 {"time_micros": 1769093280967394, "job": 90, "event": "compaction_finished", "compaction_time_micros": 66731, "compaction_time_cpu_micros": 28881, "output_level": 6, "num_output_files": 1, "total_output_size": 12161721, "num_input_records": 12569, "num_output_records": 12052, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280968892, "job": 90, "event": "table_file_deletion", "file_number": 143}
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280972856, "job": 90, "event": "table_file_deletion", "file_number": 141}
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.898703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.972909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.972917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.972920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.972923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:48:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.972926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:48:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:01.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:01 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:01 compute-1 ceph-mon[81715]: pgmap v2409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:02.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:02 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:03 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:48:03.225 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:48:03 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:48:03.227 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:48:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:03.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:03 compute-1 ceph-mon[81715]: pgmap v2410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:03 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:03 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:04.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:04 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:05.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:05 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:05 compute-1 ceph-mon[81715]: pgmap v2411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:06.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:07 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:07 compute-1 ceph-mon[81715]: pgmap v2412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:07.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:08 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:08 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:08 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:08.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:09 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:09 compute-1 ceph-mon[81715]: pgmap v2413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:09.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:10 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:10.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:11 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:11 compute-1 ceph-mon[81715]: pgmap v2414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:11.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:12.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:12 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:13 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:48:13.230 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:48:13 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:13 compute-1 ceph-mon[81715]: pgmap v2415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:13 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:13.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:14.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:14 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:15 compute-1 podman[240342]: 2026-01-22 14:48:15.116434866 +0000 UTC m=+0.103951757 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 14:48:15 compute-1 ceph-mon[81715]: pgmap v2416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:15 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:15.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:16.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:16 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:17.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:17 compute-1 ceph-mon[81715]: pgmap v2417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:17 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:18.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:19 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/818491039' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:48:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/818491039' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:48:19 compute-1 ceph-mon[81715]: pgmap v2418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:19.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:20.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:20 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:20 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:21.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:21 compute-1 ceph-mon[81715]: pgmap v2419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:21 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:22.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:22 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:22 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:23.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:23 compute-1 ceph-mon[81715]: pgmap v2420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:24.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:24 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:24 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:24 compute-1 ceph-mon[81715]: pgmap v2421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:25.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:25 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:26.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:26 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:26 compute-1 ceph-mon[81715]: pgmap v2422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:27.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:27 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:27 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:28.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:28 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:28 compute-1 ceph-mon[81715]: pgmap v2423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:29 compute-1 podman[240368]: 2026-01-22 14:48:29.059397532 +0000 UTC m=+0.049329707 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 14:48:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:29.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:29 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:30.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:30 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:30 compute-1 ceph-mon[81715]: pgmap v2424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:31.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:31 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:32.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:32 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:32 compute-1 ceph-mon[81715]: pgmap v2425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:33.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:34.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:34 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:34 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:35 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:35 compute-1 ceph-mon[81715]: pgmap v2426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:35.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:36.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:36 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:37.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:37 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:37 compute-1 ceph-mon[81715]: pgmap v2427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:38.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:38 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:38 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:39.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:40.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:40 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:40 compute-1 ceph-mon[81715]: pgmap v2428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:40 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:40 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:41.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:41 compute-1 ceph-mon[81715]: pgmap v2429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:41 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:42.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:42 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:48:42.251 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:48:42 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:48:42.252 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:48:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:43 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:43.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:44.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:44 compute-1 ceph-mon[81715]: pgmap v2430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:44 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:44 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:44 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:45.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:45 compute-1 ceph-mon[81715]: pgmap v2431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:45 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:46 compute-1 podman[240387]: 2026-01-22 14:48:46.126452576 +0000 UTC m=+0.113122446 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 14:48:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:46.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:46 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:47.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:48:47.487 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:48:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:48:47.487 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:48:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:48:47.488 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:48:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:48 compute-1 ceph-mon[81715]: pgmap v2432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:48 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:48.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:49 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:49 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:49 compute-1 ceph-mon[81715]: pgmap v2433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:49 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:49.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:50.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:50 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:51 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:48:51.254 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:48:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:51.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:51 compute-1 ceph-mon[81715]: pgmap v2434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:51 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:52.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:52 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:53.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:53 compute-1 ceph-mon[81715]: pgmap v2435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:53 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:53 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:54.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:54 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:55.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:55 compute-1 ceph-mon[81715]: pgmap v2436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:55 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:56.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:56 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:56 compute-1 ceph-mon[81715]: pgmap v2437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:57.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:57 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:58.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:58 compute-1 sudo[240415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:48:58 compute-1 sudo[240415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:58 compute-1 sudo[240415]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:58 compute-1 sudo[240440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:48:58 compute-1 sudo[240440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:58 compute-1 sudo[240440]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:58 compute-1 sudo[240465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:48:58 compute-1 sudo[240465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:58 compute-1 sudo[240465]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:58 compute-1 sudo[240490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:48:58 compute-1 sudo[240490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:59 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:59 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:59 compute-1 ceph-mon[81715]: pgmap v2438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:59 compute-1 sudo[240490]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:48:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:59.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:00 compute-1 podman[240545]: 2026-01-22 14:49:00.06569068 +0000 UTC m=+0.056095860 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:49:00 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 14:49:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:49:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:49:00 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 14:49:00 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:00.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:01 compute-1 ceph-mon[81715]: pgmap v2439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:01 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:01.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:02.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:49:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:49:02 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:49:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:49:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:49:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:49:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:49:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:49:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:03 compute-1 ceph-mon[81715]: pgmap v2440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:03 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:03 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:03.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:04.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:04 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:05 compute-1 ceph-mon[81715]: pgmap v2441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:05 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:05.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:06.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:06 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:07 compute-1 ceph-mon[81715]: pgmap v2442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:07 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:07.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:08.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:08 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:08 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:09 compute-1 ceph-mon[81715]: pgmap v2443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:09 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:09.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:09 compute-1 sudo[240565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:49:09 compute-1 sudo[240565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:49:09 compute-1 sudo[240565]: pam_unix(sudo:session): session closed for user root
Jan 22 14:49:09 compute-1 sudo[240590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:49:09 compute-1 sudo[240590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:49:09 compute-1 sudo[240590]: pam_unix(sudo:session): session closed for user root
Jan 22 14:49:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:10.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:49:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:49:10 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:11.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:11 compute-1 ceph-mon[81715]: pgmap v2444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:11 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:12.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:12 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:13.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:13 compute-1 ceph-mon[81715]: pgmap v2445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:13 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:13 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:14.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:14 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:14 compute-1 ceph-mon[81715]: pgmap v2446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:15.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:15 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:16.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:16 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:16 compute-1 ceph-mon[81715]: pgmap v2447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:16 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:17 compute-1 podman[240615]: 2026-01-22 14:49:17.129424392 +0000 UTC m=+0.111493382 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 14:49:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:17.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:17 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:18.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:18 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/653685768' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:49:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/653685768' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:49:18 compute-1 ceph-mon[81715]: pgmap v2448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:18 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:19.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:19 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:20.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:21 compute-1 ceph-mon[81715]: pgmap v2449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:21 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:21.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:22 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:22.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:23 compute-1 ceph-mon[81715]: pgmap v2450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:23.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:24 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:24 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:24.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:24 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:49:24.414 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:49:24 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:49:24.415 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:49:25 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:25 compute-1 ceph-mon[81715]: pgmap v2451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:25.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:26 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:26.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:27 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:27 compute-1 ceph-mon[81715]: pgmap v2452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:27 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:27.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:28 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:28 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:28.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:29 compute-1 ceph-mon[81715]: pgmap v2453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:29 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:29.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:30.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:30 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:31 compute-1 podman[240642]: 2026-01-22 14:49:31.097704876 +0000 UTC m=+0.083283549 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:49:31 compute-1 ceph-mon[81715]: pgmap v2454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:31 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:31.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:32.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:32 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:32 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:49:32.417 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:49:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:33 compute-1 ceph-mon[81715]: pgmap v2455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:33 compute-1 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:33 compute-1 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:33.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:34.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:34 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:35 compute-1 ceph-mon[81715]: pgmap v2456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:35 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:35.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:36.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:36 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:37 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:37 compute-1 ceph-mon[81715]: pgmap v2457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:37.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.166177) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378166709, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 1529, "num_deletes": 258, "total_data_size": 2843266, "memory_usage": 2876080, "flush_reason": "Manual Compaction"}
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378178691, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 1857518, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72127, "largest_seqno": 73651, "table_properties": {"data_size": 1851434, "index_size": 3094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15427, "raw_average_key_size": 20, "raw_value_size": 1838085, "raw_average_value_size": 2460, "num_data_blocks": 134, "num_entries": 747, "num_filter_entries": 747, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093281, "oldest_key_time": 1769093281, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 12586 microseconds, and 5647 cpu microseconds.
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.178774) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 1857518 bytes OK
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.178825) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.180475) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.180496) EVENT_LOG_v1 {"time_micros": 1769093378180490, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.180516) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 2835929, prev total WAL file size 2844673, number of live WAL files 2.
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.181442) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323636' seq:72057594037927935, type:22 .. '6C6F676D0033353230' seq:0, type:0; will stop at (end)
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(1813KB)], [144(11MB)]
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378181516, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 14019239, "oldest_snapshot_seqno": -1}
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 12268 keys, 13865239 bytes, temperature: kUnknown
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378255500, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 13865239, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13794154, "index_size": 39292, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30725, "raw_key_size": 332861, "raw_average_key_size": 27, "raw_value_size": 13581451, "raw_average_value_size": 1107, "num_data_blocks": 1477, "num_entries": 12268, "num_filter_entries": 12268, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.255800) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 13865239 bytes
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.257252) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.3 rd, 187.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 11.6 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(15.0) write-amplify(7.5) OK, records in: 12799, records dropped: 531 output_compression: NoCompression
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.257268) EVENT_LOG_v1 {"time_micros": 1769093378257260, "job": 92, "event": "compaction_finished", "compaction_time_micros": 74053, "compaction_time_cpu_micros": 43385, "output_level": 6, "num_output_files": 1, "total_output_size": 13865239, "num_input_records": 12799, "num_output_records": 12268, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378257691, "job": 92, "event": "table_file_deletion", "file_number": 146}
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378259563, "job": 92, "event": "table_file_deletion", "file_number": 144}
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.181374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.259625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.259632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.259634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.259636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.259638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.260044) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378260129, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 256, "num_deletes": 250, "total_data_size": 23018, "memory_usage": 28880, "flush_reason": "Manual Compaction"}
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378262247, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 13847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73653, "largest_seqno": 73907, "table_properties": {"data_size": 12094, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5124, "raw_average_key_size": 20, "raw_value_size": 8697, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093378, "oldest_key_time": 1769093378, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 2222 microseconds, and 848 cpu microseconds.
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.262278) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 13847 bytes OK
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.262299) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.263376) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.263388) EVENT_LOG_v1 {"time_micros": 1769093378263384, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.263395) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 21000, prev total WAL file size 21000, number of live WAL files 2.
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.263710) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303037' seq:72057594037927935, type:22 .. '6D6772737461740032323538' seq:0, type:0; will stop at (end)
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(13KB)], [147(13MB)]
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378263737, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 13879086, "oldest_snapshot_seqno": -1}
Jan 22 14:49:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:38.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 12019 keys, 10006926 bytes, temperature: kUnknown
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378313149, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 10006926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9942432, "index_size": 33341, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30085, "raw_key_size": 327879, "raw_average_key_size": 27, "raw_value_size": 9738944, "raw_average_value_size": 810, "num_data_blocks": 1228, "num_entries": 12019, "num_filter_entries": 12019, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.313423) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 10006926 bytes
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.314534) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 280.3 rd, 202.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.2 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(1725.0) write-amplify(722.7) OK, records in: 12523, records dropped: 504 output_compression: NoCompression
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.314561) EVENT_LOG_v1 {"time_micros": 1769093378314549, "job": 94, "event": "compaction_finished", "compaction_time_micros": 49511, "compaction_time_cpu_micros": 24823, "output_level": 6, "num_output_files": 1, "total_output_size": 10006926, "num_input_records": 12523, "num_output_records": 12019, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378314726, "job": 94, "event": "table_file_deletion", "file_number": 149}
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378317643, "job": 94, "event": "table_file_deletion", "file_number": 147}
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.263604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:38 compute-1 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 4368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:39.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:39 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:39 compute-1 ceph-mon[81715]: pgmap v2458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:40.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:40 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:41.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:41 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:41 compute-1 ceph-mon[81715]: pgmap v2459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:42.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:42 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:42 compute-1 ceph-mon[81715]: pgmap v2460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:43.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:43 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:43 compute-1 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 4373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:44.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:44 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:44 compute-1 ceph-mon[81715]: pgmap v2461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:45.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:45 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:46.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:46 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:46 compute-1 ceph-mon[81715]: pgmap v2462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:49:47.487 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:49:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:49:47.488 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:49:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:49:47.488 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:49:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:47.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:47 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:48 compute-1 podman[240663]: 2026-01-22 14:49:48.216680594 +0000 UTC m=+0.196229286 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:49:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:48.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:48 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:48 compute-1 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 4378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:48 compute-1 ceph-mon[81715]: pgmap v2463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:49.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:49 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:49 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:50.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:50 compute-1 ceph-mon[81715]: pgmap v2464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:50 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:51.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:51 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:52.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:52 compute-1 ceph-mon[81715]: pgmap v2465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:52 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:53.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:53 compute-1 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 4383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:53 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:54.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:54 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:54 compute-1 ceph-mon[81715]: pgmap v2466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:55.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:55 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:56.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:56 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:56 compute-1 ceph-mon[81715]: pgmap v2467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:57.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:58 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:58.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:59 compute-1 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:59 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:59 compute-1 ceph-mon[81715]: pgmap v2468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:49:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:59.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:00 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:50:00 compute-1 ceph-mon[81715]: Health detail: HEALTH_WARN 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops
Jan 22 14:50:00 compute-1 ceph-mon[81715]: [WRN] SLOW_OPS: 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops
Jan 22 14:50:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:00.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:01 compute-1 ceph-mon[81715]: pgmap v2469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:01 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:50:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:01.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:02 compute-1 podman[240690]: 2026-01-22 14:50:02.06249134 +0000 UTC m=+0.053928506 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 14:50:02 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:50:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:02.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:03 compute-1 ceph-mon[81715]: pgmap v2470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:03 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:50:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:03.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:04 compute-1 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 4393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:04 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:04.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:04 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:50:04.670 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:50:04 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:50:04.671 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:50:05 compute-1 ceph-mon[81715]: pgmap v2471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:05 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:05.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:06 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:06.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:07 compute-1 ceph-mon[81715]: pgmap v2472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:07 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:07.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:08 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:08 compute-1 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:50:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:08.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:50:08 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:50:08.673 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:50:09 compute-1 ceph-mon[81715]: pgmap v2473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:09 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:09.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:10 compute-1 sudo[240709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:50:10 compute-1 sudo[240709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:10 compute-1 sudo[240709]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:10 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:10 compute-1 sudo[240734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:50:10 compute-1 sudo[240734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:10 compute-1 sudo[240734]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:10.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:10 compute-1 sudo[240759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:50:10 compute-1 sudo[240759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:10 compute-1 sudo[240759]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:10 compute-1 sudo[240784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:50:10 compute-1 sudo[240784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:10 compute-1 sudo[240784]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:11 compute-1 ceph-mon[81715]: pgmap v2474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:11 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:50:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:50:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:50:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:50:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:50:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:50:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:11.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:12 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:50:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:12.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:50:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:13 compute-1 ceph-mon[81715]: pgmap v2475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:13 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:13 compute-1 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:13.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:14 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:14.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:15 compute-1 ceph-mon[81715]: pgmap v2476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:15 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:15.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:16.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:16 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:17 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:17 compute-1 ceph-mon[81715]: pgmap v2477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:17.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:18.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:18 compute-1 sudo[240840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:50:18 compute-1 sudo[240840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:18 compute-1 sudo[240840]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:18 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:18 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:50:18 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:50:18 compute-1 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:18 compute-1 sudo[240871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:50:18 compute-1 sudo[240871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:18 compute-1 sudo[240871]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:18 compute-1 podman[240864]: 2026-01-22 14:50:18.502079154 +0000 UTC m=+0.094466130 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 14:50:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:50:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:19.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:50:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/369554208' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:50:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/369554208' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:50:19 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:19 compute-1 ceph-mon[81715]: pgmap v2478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:20.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:20 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:20 compute-1 ceph-mon[81715]: pgmap v2479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:21.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:21 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:22.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:22 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:22 compute-1 ceph-mon[81715]: pgmap v2480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:50:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:23.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:50:23 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:23 compute-1 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:24.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:24 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:24 compute-1 ceph-mon[81715]: pgmap v2481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:25.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:25 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:25 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:50:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:26.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:50:26 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:26 compute-1 ceph-mon[81715]: pgmap v2482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:27.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:27 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:28.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:28 compute-1 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:28 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:28 compute-1 ceph-mon[81715]: pgmap v2483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:29.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:29 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:50:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:30.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:50:30 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:30 compute-1 ceph-mon[81715]: pgmap v2484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:50:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:31.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:50:31 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:32.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:33 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:33 compute-1 ceph-mon[81715]: pgmap v2485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:33 compute-1 podman[240918]: 2026-01-22 14:50:33.122563298 +0000 UTC m=+0.104032529 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:50:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:33.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:34 compute-1 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:34 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:34.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:35 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:35 compute-1 ceph-mon[81715]: pgmap v2486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:35.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:36 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:36.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:37 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:37 compute-1 ceph-mon[81715]: pgmap v2487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:50:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:37.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:38 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:38.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:39 compute-1 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:39 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:39 compute-1 ceph-mon[81715]: pgmap v2488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Jan 22 14:50:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:39.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:40 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:40.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:41 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:41 compute-1 ceph-mon[81715]: pgmap v2489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 710 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 MiB/s wr, 25 op/s
Jan 22 14:50:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:41.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:42 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:42.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:43 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:43 compute-1 ceph-mon[81715]: pgmap v2490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 710 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 MiB/s wr, 25 op/s
Jan 22 14:50:43 compute-1 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:43.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:44 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:44.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:45 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:45 compute-1 ceph-mon[81715]: pgmap v2491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 39 op/s
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.331352) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445331403, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1153, "num_deletes": 251, "total_data_size": 1915672, "memory_usage": 1936992, "flush_reason": "Manual Compaction"}
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445340077, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 1258029, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73912, "largest_seqno": 75060, "table_properties": {"data_size": 1253303, "index_size": 2121, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12396, "raw_average_key_size": 20, "raw_value_size": 1242935, "raw_average_value_size": 2068, "num_data_blocks": 92, "num_entries": 601, "num_filter_entries": 601, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093378, "oldest_key_time": 1769093378, "file_creation_time": 1769093445, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 8751 microseconds, and 3721 cpu microseconds.
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.340111) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 1258029 bytes OK
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.340128) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341535) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341550) EVENT_LOG_v1 {"time_micros": 1769093445341545, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341568) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 1909918, prev total WAL file size 1909918, number of live WAL files 2.
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.342186) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(1228KB)], [150(9772KB)]
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445342230, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 11264955, "oldest_snapshot_seqno": -1}
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 12105 keys, 9648861 bytes, temperature: kUnknown
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445399518, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 9648861, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9584277, "index_size": 33239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30277, "raw_key_size": 330813, "raw_average_key_size": 27, "raw_value_size": 9379677, "raw_average_value_size": 774, "num_data_blocks": 1219, "num_entries": 12105, "num_filter_entries": 12105, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093445, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.399839) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 9648861 bytes
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.401014) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.3 rd, 168.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.5 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(16.6) write-amplify(7.7) OK, records in: 12620, records dropped: 515 output_compression: NoCompression
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.401036) EVENT_LOG_v1 {"time_micros": 1769093445401025, "job": 96, "event": "compaction_finished", "compaction_time_micros": 57373, "compaction_time_cpu_micros": 30946, "output_level": 6, "num_output_files": 1, "total_output_size": 9648861, "num_input_records": 12620, "num_output_records": 12105, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445401488, "job": 96, "event": "table_file_deletion", "file_number": 152}
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445403748, "job": 96, "event": "table_file_deletion", "file_number": 150}
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.342125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.403824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.403829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.403832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.403834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:50:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.403836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:50:45 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:50:45.449 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:50:45 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:50:45.450 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:50:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:45.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:46 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:50:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:46.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:50:47 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:47 compute-1 ceph-mon[81715]: pgmap v2492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 22 14:50:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:50:47.488 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:50:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:50:47.489 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:50:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:50:47.489 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:50:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:47.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:48 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:48 compute-1 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:48.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:49 compute-1 podman[240937]: 2026-01-22 14:50:49.130691407 +0000 UTC m=+0.120375389 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 22 14:50:49 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:49 compute-1 ceph-mon[81715]: pgmap v2493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 705 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 22 14:50:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:49.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:50.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:50 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:51 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:50:51.454 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:50:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:51.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:51 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:51 compute-1 ceph-mon[81715]: pgmap v2494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 22 14:50:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:52.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:52 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:53.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:53 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:53 compute-1 ceph-mon[81715]: pgmap v2495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 282 KiB/s wr, 17 op/s
Jan 22 14:50:53 compute-1 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:50:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:54.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:50:54 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:55.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:55 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:55 compute-1 ceph-mon[81715]: pgmap v2496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 282 KiB/s wr, 17 op/s
Jan 22 14:50:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:56.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:56 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:56 compute-1 ceph-mon[81715]: pgmap v2497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 43 KiB/s wr, 3 op/s
Jan 22 14:50:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:50:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:57.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:50:57 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:58.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:58 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:50:58 compute-1 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:58 compute-1 ceph-mon[81715]: pgmap v2498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:50:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:59.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:59 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:00.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:00 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:00 compute-1 ceph-mon[81715]: pgmap v2499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:01.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:01 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:51:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:02.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:51:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:02 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:02 compute-1 ceph-mon[81715]: pgmap v2500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:51:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:03.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:51:03 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:03 compute-1 ceph-mon[81715]: Health check update: 32 slow ops, oldest one blocked for 4453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:04 compute-1 podman[240963]: 2026-01-22 14:51:04.069475702 +0000 UTC m=+0.053139265 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 14:51:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:04.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:04 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:04 compute-1 ceph-mon[81715]: pgmap v2501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:51:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:05.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:51:05 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:05 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:06.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:06 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:06 compute-1 ceph-mon[81715]: pgmap v2502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:51:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:07.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:51:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:07 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:08.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:08 compute-1 ceph-mon[81715]: Health check update: 32 slow ops, oldest one blocked for 4458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:08 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:08 compute-1 ceph-mon[81715]: pgmap v2503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:09.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:09 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:51:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:10.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:51:10 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:10 compute-1 ceph-mon[81715]: pgmap v2504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:11.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:11 compute-1 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 14:51:12 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:12.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:13 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:13 compute-1 ceph-mon[81715]: pgmap v2505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:13.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:14 compute-1 ceph-mon[81715]: Health check update: 32 slow ops, oldest one blocked for 4462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:14 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:14.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:15 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:15 compute-1 ceph-mon[81715]: pgmap v2506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:15.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:16 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:16.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:17 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:17 compute-1 ceph-mon[81715]: pgmap v2507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:17.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:18 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:18 compute-1 ceph-mon[81715]: Health check update: 32 slow ops, oldest one blocked for 4467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:18.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:18 compute-1 sudo[240982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:51:18 compute-1 sudo[240982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:18 compute-1 sudo[240982]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:18 compute-1 sudo[241007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:51:18 compute-1 sudo[241007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:18 compute-1 sudo[241007]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:18 compute-1 sudo[241032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:51:18 compute-1 sudo[241032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:18 compute-1 sudo[241032]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:18 compute-1 sudo[241057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:51:18 compute-1 sudo[241057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:19 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2269611559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:51:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2269611559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:51:19 compute-1 ceph-mon[81715]: pgmap v2508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:19 compute-1 sudo[241057]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:51:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:19.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:51:20 compute-1 podman[241112]: 2026-01-22 14:51:20.113293074 +0000 UTC m=+0.104190282 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:51:20 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:51:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:51:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:51:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:51:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:51:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:51:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:20.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:21 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:21 compute-1 ceph-mon[81715]: pgmap v2509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:21.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:22 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:22.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:23 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:23 compute-1 ceph-mon[81715]: pgmap v2510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:23 compute-1 ceph-mon[81715]: Health check update: 32 slow ops, oldest one blocked for 4472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:51:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:23.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:51:24 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:24.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:25 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:25 compute-1 ceph-mon[81715]: pgmap v2511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:25.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:26 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:51:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:26.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:51:26 compute-1 sudo[241137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:51:26 compute-1 sudo[241137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:26 compute-1 sudo[241137]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:26 compute-1 sudo[241162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:51:26 compute-1 sudo[241162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:26 compute-1 sudo[241162]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:27 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:51:27.279 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:51:27 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:51:27.280 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:51:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:51:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:27.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:51:27 compute-1 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:51:27 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:51:27 compute-1 ceph-mon[81715]: pgmap v2512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:28.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:28 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:51:28 compute-1 ceph-mon[81715]: Health check update: 32 slow ops, oldest one blocked for 4477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:28 compute-1 ceph-mon[81715]: pgmap v2513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:29 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:51:29.282 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:51:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:29.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:29 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:51:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:30.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:30 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:51:30 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:51:30 compute-1 ceph-mon[81715]: pgmap v2514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:31.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:31 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:51:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:32.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:32 compute-1 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:51:32 compute-1 ceph-mon[81715]: pgmap v2515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:33.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:33 compute-1 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 4482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:33 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:33 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/26803393' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:51:33 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/26803393' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:51:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:34.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:34 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:34 compute-1 ceph-mon[81715]: pgmap v2516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.8 KiB/s rd, 511 B/s wr, 2 op/s
Jan 22 14:51:35 compute-1 podman[241187]: 2026-01-22 14:51:35.051712628 +0000 UTC m=+0.046894485 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 14:51:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:35.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:35 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2934091256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:51:35 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2934091256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:51:35 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:36.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:36 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:36 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3194695356' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:51:36 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3194695356' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:51:36 compute-1 ceph-mon[81715]: pgmap v2517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 511 B/s wr, 26 op/s
Jan 22 14:51:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:37.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:38 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:51:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:38.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:51:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 e162: 3 total, 3 up, 3 in
Jan 22 14:51:39 compute-1 ceph-mon[81715]: Health check update: 11 slow ops, oldest one blocked for 4487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:39 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:39 compute-1 ceph-mon[81715]: pgmap v2518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 714 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 1.5 KiB/s wr, 42 op/s
Jan 22 14:51:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:39.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:40 compute-1 ceph-mon[81715]: osdmap e162: 3 total, 3 up, 3 in
Jan 22 14:51:40 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:40.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:41 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:41 compute-1 ceph-mon[81715]: pgmap v2520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 2.2 KiB/s wr, 68 op/s
Jan 22 14:51:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:51:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:41.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:51:42 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:42.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:43 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:43 compute-1 ceph-mon[81715]: pgmap v2521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 2.2 KiB/s wr, 68 op/s
Jan 22 14:51:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:43.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:44 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:44 compute-1 ceph-mon[81715]: Health check update: 11 slow ops, oldest one blocked for 4493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:44.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:45 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:45 compute-1 ceph-mon[81715]: pgmap v2522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.6 KiB/s wr, 65 op/s
Jan 22 14:51:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:45.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:46 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:46.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:51:47.490 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:51:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:51:47.490 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:51:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:51:47.490 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:51:47 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:51:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:47.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:51:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:48.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:48 compute-1 ceph-mon[81715]: pgmap v2523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.6 KiB/s wr, 37 op/s
Jan 22 14:51:48 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:48 compute-1 ceph-mon[81715]: Health check update: 11 slow ops, oldest one blocked for 4498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:49 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:49.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:50.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:50 compute-1 ceph-mon[81715]: pgmap v2524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 409 B/s wr, 18 op/s
Jan 22 14:51:50 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:51 compute-1 podman[241206]: 2026-01-22 14:51:51.078187785 +0000 UTC m=+0.069620060 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 14:51:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:51.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:52 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:52 compute-1 ceph-mon[81715]: pgmap v2525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 351 B/s wr, 15 op/s
Jan 22 14:51:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:52.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:53 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:53 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:53.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:54 compute-1 ceph-mon[81715]: pgmap v2526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Jan 22 14:51:54 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:54 compute-1 ceph-mon[81715]: Health check update: 11 slow ops, oldest one blocked for 4503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:54.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:55 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:55.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:56 compute-1 ceph-mon[81715]: pgmap v2527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Jan 22 14:51:56 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:56.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:51:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:57.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:51:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:57 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:58.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:59 compute-1 ceph-mon[81715]: pgmap v2528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:59 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:59 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:59 compute-1 ceph-mon[81715]: Health check update: 11 slow ops, oldest one blocked for 4508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:51:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:59.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:00.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:00 compute-1 ceph-mon[81715]: pgmap v2529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:00 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:01.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:02 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:02 compute-1 ceph-mon[81715]: pgmap v2530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:52:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:02.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:52:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:03 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:03 compute-1 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:03.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:04.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:04 compute-1 ceph-mon[81715]: pgmap v2531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:04 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:04 compute-1 ceph-mon[81715]: Health check update: 11 slow ops, oldest one blocked for 4513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:05.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:06 compute-1 podman[241234]: 2026-01-22 14:52:06.076638294 +0000 UTC m=+0.061663475 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 14:52:06 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:06 compute-1 ceph-mon[81715]: pgmap v2532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:06.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:07 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:07 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:07.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:08.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:08 compute-1 ceph-mon[81715]: pgmap v2533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:08 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:09 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:09.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:10.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:10 compute-1 ceph-mon[81715]: pgmap v2534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:10 compute-1 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:11 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:52:11.063 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:52:11 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:52:11.064 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:52:11 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:52:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:11.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:52:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:52:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:12.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:52:12 compute-1 ceph-mon[81715]: pgmap v2535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:12 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:12 compute-1 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 4523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 22 14:52:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:13.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 22 14:52:14 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:14 compute-1 ceph-mon[81715]: pgmap v2536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:14 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:14.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:15 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:15.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:16 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:52:16.066 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:52:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:16.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:16 compute-1 ceph-mon[81715]: pgmap v2537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:16 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:17.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:18 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:18 compute-1 ceph-mon[81715]: pgmap v2538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:18 compute-1 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 4528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:18.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:19 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:19 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/531944098' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:52:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/531944098' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:52:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:52:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:19.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:52:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:20.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:20 compute-1 ceph-mon[81715]: pgmap v2539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:20 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:52:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:21.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:52:21 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:21 compute-1 ceph-mon[81715]: pgmap v2540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:21 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:22 compute-1 podman[241254]: 2026-01-22 14:52:22.139458869 +0000 UTC m=+0.126011761 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller)
Jan 22 14:52:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:22.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:23 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:23.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:24.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:24 compute-1 ceph-mon[81715]: pgmap v2541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:24 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:24 compute-1 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 4533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:52:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:25.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:52:26 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:52:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:26.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:52:26 compute-1 sudo[241280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:52:26 compute-1 sudo[241280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:26 compute-1 sudo[241280]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:27 compute-1 sudo[241305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:52:27 compute-1 sudo[241305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:27 compute-1 sudo[241305]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:27 compute-1 sudo[241330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:52:27 compute-1 sudo[241330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:27 compute-1 sudo[241330]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:27 compute-1 sudo[241355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:52:27 compute-1 sudo[241355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:27 compute-1 ceph-mon[81715]: pgmap v2542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:27 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:27 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:27 compute-1 sudo[241355]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:27.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:28.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:28 compute-1 ceph-mon[81715]: pgmap v2543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:28 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 14:52:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:52:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:52:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:52:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:52:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:52:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:52:28 compute-1 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 4538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:29 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:29.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:30.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:30 compute-1 ceph-mon[81715]: pgmap v2544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:30 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:31.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:32 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:32 compute-1 ceph-mon[81715]: pgmap v2545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:32 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:32.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:33 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:33.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:34.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:34 compute-1 ceph-mon[81715]: pgmap v2546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:34 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:34 compute-1 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:35 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:35.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:36.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:37 compute-1 ceph-mon[81715]: pgmap v2547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:37 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:37 compute-1 podman[241410]: 2026-01-22 14:52:37.048330315 +0000 UTC m=+0.045214111 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 14:52:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:52:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:37.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:52:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:38 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:38 compute-1 ceph-mon[81715]: pgmap v2548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:38 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:38 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:52:38 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:52:38 compute-1 sudo[241429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:52:38 compute-1 sudo[241429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:38 compute-1 sudo[241429]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:38 compute-1 sudo[241454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:52:38 compute-1 sudo[241454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:38 compute-1 sudo[241454]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.463235) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558463327, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1699, "num_deletes": 250, "total_data_size": 3210495, "memory_usage": 3277872, "flush_reason": "Manual Compaction"}
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558481960, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 2099552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75065, "largest_seqno": 76759, "table_properties": {"data_size": 2092912, "index_size": 3521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16257, "raw_average_key_size": 19, "raw_value_size": 2078219, "raw_average_value_size": 2546, "num_data_blocks": 154, "num_entries": 816, "num_filter_entries": 816, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093446, "oldest_key_time": 1769093446, "file_creation_time": 1769093558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 18749 microseconds, and 8858 cpu microseconds.
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.482003) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 2099552 bytes OK
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.482023) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.483713) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.483727) EVENT_LOG_v1 {"time_micros": 1769093558483722, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.483745) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 3202444, prev total WAL file size 3202444, number of live WAL files 2.
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.484565) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353031' seq:0, type:0; will stop at (end)
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(2050KB)], [153(9422KB)]
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558484685, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 11748413, "oldest_snapshot_seqno": -1}
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 12404 keys, 10657385 bytes, temperature: kUnknown
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558535372, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 10657385, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10590391, "index_size": 34881, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31045, "raw_key_size": 339539, "raw_average_key_size": 27, "raw_value_size": 10379673, "raw_average_value_size": 836, "num_data_blocks": 1270, "num_entries": 12404, "num_filter_entries": 12404, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.535893) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 10657385 bytes
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.537182) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 231.4 rd, 210.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.2 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(10.7) write-amplify(5.1) OK, records in: 12921, records dropped: 517 output_compression: NoCompression
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.537207) EVENT_LOG_v1 {"time_micros": 1769093558537196, "job": 98, "event": "compaction_finished", "compaction_time_micros": 50761, "compaction_time_cpu_micros": 27007, "output_level": 6, "num_output_files": 1, "total_output_size": 10657385, "num_input_records": 12921, "num_output_records": 12404, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558538061, "job": 98, "event": "table_file_deletion", "file_number": 155}
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558539689, "job": 98, "event": "table_file_deletion", "file_number": 153}
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.484469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.539763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.539769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.539770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.539772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:52:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.539773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:52:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:38.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:39 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:39 compute-1 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:39.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:52:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:40.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:52:41 compute-1 ceph-mon[81715]: pgmap v2549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:41 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:41.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:42 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:42 compute-1 ceph-mon[81715]: pgmap v2550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:42 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:42.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:43 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:43.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:44 compute-1 ceph-mon[81715]: pgmap v2551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:44 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:44 compute-1 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:52:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:44.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:52:45 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:45.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:46 compute-1 ceph-mon[81715]: pgmap v2552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:46 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:52:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:46.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:52:47 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:52:47.490 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:52:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:52:47.491 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:52:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:52:47.491 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:52:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:47.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:48 compute-1 ceph-mon[81715]: pgmap v2553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:48 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:48.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:49 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:49 compute-1 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:52:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:49.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:52:50 compute-1 ceph-mon[81715]: pgmap v2554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:50 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:50.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:51 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:52:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:51.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:52:52 compute-1 ceph-mon[81715]: pgmap v2555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:52 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:52:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:52.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:52:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:53 compute-1 podman[241479]: 2026-01-22 14:52:53.104369109 +0000 UTC m=+0.096016653 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 14:52:53 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:52:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:53.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:52:53 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:52:53.974 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:52:53 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:52:53.976 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:52:54 compute-1 ceph-mon[81715]: pgmap v2556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:54 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:54 compute-1 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:52:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:54.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:52:55 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:55.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:56 compute-1 ceph-mon[81715]: pgmap v2557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:56 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:52:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:56.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:52:57 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:52:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:57.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:52:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:58 compute-1 ceph-mon[81715]: pgmap v2558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:58 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:52:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:58.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:59 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:52:59 compute-1 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:52:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:59.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:59 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:52:59.978 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:53:00 compute-1 ceph-mon[81715]: pgmap v2559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:00 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:00.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:01 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:01.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:02 compute-1 ceph-mon[81715]: pgmap v2560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:02 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:02.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:03 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:03.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:04 compute-1 ceph-mon[81715]: pgmap v2561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:04 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:04 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4572 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:04.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:05 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:05.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:06 compute-1 ceph-mon[81715]: pgmap v2562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:06 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:06.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:07 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:07.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:07 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:08 compute-1 podman[241505]: 2026-01-22 14:53:08.060177627 +0000 UTC m=+0.050211206 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 14:53:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:08.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:08 compute-1 ceph-mon[81715]: pgmap v2563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:08 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:08 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:09.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:09 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:09 compute-1 ceph-mon[81715]: pgmap v2564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:09 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:10.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:11 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:11.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:12 compute-1 ceph-mon[81715]: pgmap v2565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:12 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:12.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:12 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:13 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:13.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:14 compute-1 ceph-mon[81715]: pgmap v2566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:14 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:14 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:14.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:14 compute-1 sshd-session[241525]: Connection closed by 195.177.94.68 port 35604
Jan 22 14:53:15 compute-1 sshd-session[241526]: Connection closed by authenticating user root 195.177.94.68 port 35620 [preauth]
Jan 22 14:53:15 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:15.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:16.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:16 compute-1 ceph-mon[81715]: pgmap v2567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:16 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:17.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:17 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:18 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:18.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:19 compute-1 ceph-mon[81715]: pgmap v2568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:19 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:19 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3578732624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:53:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3578732624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:53:19 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:19.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:20 compute-1 ceph-mon[81715]: pgmap v2569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:20 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:20.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:21 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:21.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:22 compute-1 ceph-mon[81715]: pgmap v2570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:22 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:22.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:22 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:23 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:23 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:23.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:24 compute-1 podman[241528]: 2026-01-22 14:53:24.120242069 +0000 UTC m=+0.101781447 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 22 14:53:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:24.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:24 compute-1 ceph-mon[81715]: pgmap v2571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:24 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:25 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:25.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:26.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:26 compute-1 ceph-mon[81715]: pgmap v2572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:26 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:27 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:27 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:27.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:28.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:28 compute-1 ceph-mon[81715]: pgmap v2573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:28 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:28 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:29.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:29 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:29 compute-1 ceph-mon[81715]: pgmap v2574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:30.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:30 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:30 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:31.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:31 compute-1 ceph-mon[81715]: pgmap v2575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:31 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:32.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:33 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:33.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:34 compute-1 ceph-mon[81715]: pgmap v2576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:34 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:34 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:34.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:35 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:35.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:36 compute-1 ceph-mon[81715]: pgmap v2577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:36 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:36.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:37 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:37.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:38 compute-1 ceph-mon[81715]: pgmap v2578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:38 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.314705) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618314745, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1001, "num_deletes": 251, "total_data_size": 1620554, "memory_usage": 1650824, "flush_reason": "Manual Compaction"}
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618324534, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 1064004, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76764, "largest_seqno": 77760, "table_properties": {"data_size": 1059795, "index_size": 1732, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10912, "raw_average_key_size": 20, "raw_value_size": 1050737, "raw_average_value_size": 1953, "num_data_blocks": 75, "num_entries": 538, "num_filter_entries": 538, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093559, "oldest_key_time": 1769093559, "file_creation_time": 1769093618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 9896 microseconds, and 5381 cpu microseconds.
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.324597) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 1064004 bytes OK
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.324629) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.326589) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.326610) EVENT_LOG_v1 {"time_micros": 1769093618326603, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.326632) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1615456, prev total WAL file size 1615456, number of live WAL files 2.
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.328079) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(1039KB)], [156(10MB)]
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618328111, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 11721389, "oldest_snapshot_seqno": -1}
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 12431 keys, 10129817 bytes, temperature: kUnknown
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618384534, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 10129817, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10063191, "index_size": 34449, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 341173, "raw_average_key_size": 27, "raw_value_size": 9852380, "raw_average_value_size": 792, "num_data_blocks": 1246, "num_entries": 12431, "num_filter_entries": 12431, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.384799) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10129817 bytes
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.386085) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.4 rd, 179.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.2 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(20.5) write-amplify(9.5) OK, records in: 12942, records dropped: 511 output_compression: NoCompression
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.386102) EVENT_LOG_v1 {"time_micros": 1769093618386094, "job": 100, "event": "compaction_finished", "compaction_time_micros": 56506, "compaction_time_cpu_micros": 32693, "output_level": 6, "num_output_files": 1, "total_output_size": 10129817, "num_input_records": 12942, "num_output_records": 12431, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618386589, "job": 100, "event": "table_file_deletion", "file_number": 158}
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618388496, "job": 100, "event": "table_file_deletion", "file_number": 156}
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.327686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.388580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.388585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.388586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.388588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:53:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.388590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:53:38 compute-1 sudo[241555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:53:38 compute-1 sudo[241555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:38 compute-1 sudo[241555]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:38 compute-1 sudo[241586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:53:38 compute-1 podman[241579]: 2026-01-22 14:53:38.501753135 +0000 UTC m=+0.051282275 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 14:53:38 compute-1 sudo[241586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:38 compute-1 sudo[241586]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:38 compute-1 sudo[241625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:53:38 compute-1 sudo[241625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:38 compute-1 sudo[241625]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:38 compute-1 sudo[241650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:53:38 compute-1 sudo[241650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:38.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:39 compute-1 sudo[241650]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:39 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:39 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:53:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:53:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:53:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:53:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:53:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:53:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:39.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:40 compute-1 ceph-mon[81715]: pgmap v2579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:40 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:40.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:41 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:41.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:42.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:42 compute-1 ceph-mon[81715]: pgmap v2580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:42 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:43.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:44 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:44 compute-1 ceph-mon[81715]: pgmap v2581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:44 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:44 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4613 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:44.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:45 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:45.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:46 compute-1 ceph-mon[81715]: pgmap v2582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:46 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:46.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:46 compute-1 sudo[241706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:53:46 compute-1 sudo[241706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:46 compute-1 sudo[241706]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:46 compute-1 sudo[241731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:53:46 compute-1 sudo[241731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:46 compute-1 sudo[241731]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:53:47.491 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:53:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:53:47.492 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:53:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:53:47.492 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:53:47 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:53:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:53:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:47.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:48 compute-1 ceph-mon[81715]: pgmap v2583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:48 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:48 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:48.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:49 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:49.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:50 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:53:50.104 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:53:50 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:53:50.106 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:53:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:50.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:50 compute-1 ceph-mon[81715]: pgmap v2584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:50 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:51.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:52 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:52 compute-1 ceph-mon[81715]: pgmap v2585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:52 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:52.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:53 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:53.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:54 compute-1 ceph-mon[81715]: pgmap v2586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:54 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:54 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:54.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:55 compute-1 podman[241756]: 2026-01-22 14:53:55.118897451 +0000 UTC m=+0.110485383 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:53:55 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:55.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:56 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:53:56.107 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:53:56 compute-1 ceph-mon[81715]: pgmap v2587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:56 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:56.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:57 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:57.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:57 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:58.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:59 compute-1 ceph-mon[81715]: pgmap v2588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:59 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:59 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:53:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:59.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:00 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:00 compute-1 ceph-mon[81715]: pgmap v2589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:00 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:54:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:00.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:54:01 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:01.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:02.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:03 compute-1 ceph-mon[81715]: pgmap v2590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:03 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:03.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:54:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:04.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:54:04 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:04 compute-1 ceph-mon[81715]: pgmap v2591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:04 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:04 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:05 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:05.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:06.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:06 compute-1 ceph-mon[81715]: pgmap v2592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:06 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:07 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:07.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:54:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:08.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:54:09 compute-1 podman[241783]: 2026-01-22 14:54:09.069120495 +0000 UTC m=+0.054843280 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:54:09 compute-1 ceph-mon[81715]: pgmap v2593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:09 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:09 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:09.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:10 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:10 compute-1 ceph-mon[81715]: pgmap v2594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:10 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:10.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:11 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:54:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:11.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:54:12 compute-1 ceph-mon[81715]: pgmap v2595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:12 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:12.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.560883) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653560924, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 694, "num_deletes": 256, "total_data_size": 999112, "memory_usage": 1012248, "flush_reason": "Manual Compaction"}
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653567614, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 656855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77765, "largest_seqno": 78454, "table_properties": {"data_size": 653574, "index_size": 1124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8446, "raw_average_key_size": 19, "raw_value_size": 646531, "raw_average_value_size": 1493, "num_data_blocks": 48, "num_entries": 433, "num_filter_entries": 433, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093619, "oldest_key_time": 1769093619, "file_creation_time": 1769093653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 6818 microseconds, and 3101 cpu microseconds.
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.567699) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 656855 bytes OK
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.567725) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.569164) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.569189) EVENT_LOG_v1 {"time_micros": 1769093653569181, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.569216) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 995234, prev total WAL file size 995234, number of live WAL files 2.
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.570156) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353139' seq:72057594037927935, type:22 .. '6C6F676D0033373732' seq:0, type:0; will stop at (end)
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(641KB)], [159(9892KB)]
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653570258, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 10786672, "oldest_snapshot_seqno": -1}
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 12340 keys, 10643294 bytes, temperature: kUnknown
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653648870, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 10643294, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10576425, "index_size": 34884, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30917, "raw_key_size": 340474, "raw_average_key_size": 27, "raw_value_size": 10366369, "raw_average_value_size": 840, "num_data_blocks": 1261, "num_entries": 12340, "num_filter_entries": 12340, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.649281) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 10643294 bytes
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.651067) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.0 rd, 135.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.7 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(32.6) write-amplify(16.2) OK, records in: 12864, records dropped: 524 output_compression: NoCompression
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.651088) EVENT_LOG_v1 {"time_micros": 1769093653651078, "job": 102, "event": "compaction_finished", "compaction_time_micros": 78755, "compaction_time_cpu_micros": 31911, "output_level": 6, "num_output_files": 1, "total_output_size": 10643294, "num_input_records": 12864, "num_output_records": 12340, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653651316, "job": 102, "event": "table_file_deletion", "file_number": 161}
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653652854, "job": 102, "event": "table_file_deletion", "file_number": 159}
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.570063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.652896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.652901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.652902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.652904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:54:13 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.652905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:54:13 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:13 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:54:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:13.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:54:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:14.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:14 compute-1 ceph-mon[81715]: pgmap v2596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:14 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:15.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:15 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:15 compute-1 ceph-mon[81715]: pgmap v2597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:54:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:16.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:54:17 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:17 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:17.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:54:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4136729720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:54:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:54:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4136729720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:54:18 compute-1 ceph-mon[81715]: pgmap v2598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:18 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:54:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:18.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:54:19 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4136729720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:54:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4136729720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:54:19 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4647 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:19.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:20.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:21 compute-1 ceph-mon[81715]: pgmap v2599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:21 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:21.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:22 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:22 compute-1 ceph-mon[81715]: pgmap v2600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:22 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:22.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:23 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:54:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:23.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:54:24 compute-1 ceph-mon[81715]: pgmap v2601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:24 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:24 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4652 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:24.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:25 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:25.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:26 compute-1 podman[241802]: 2026-01-22 14:54:26.105613588 +0000 UTC m=+0.091491531 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:54:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:54:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:26.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:54:27 compute-1 ceph-mon[81715]: pgmap v2602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:27 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:27.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:28 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:28 compute-1 ceph-mon[81715]: pgmap v2603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:28 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:28.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:29 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:29 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4657 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:54:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:29.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:54:30 compute-1 ceph-mon[81715]: pgmap v2604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:30 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:30.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:31 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:54:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:31.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:54:32 compute-1 ceph-mon[81715]: pgmap v2605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:32 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:32.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:33 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:54:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:33.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:54:34 compute-1 ceph-mon[81715]: pgmap v2606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:34 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:34 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:54:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.5 total, 600.0 interval
                                           Cumulative writes: 13K writes, 42K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 13K writes, 4199 syncs, 3.13 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1191 writes, 2052 keys, 1191 commit groups, 1.0 writes per commit group, ingest: 0.86 MB, 0.00 MB/s
                                           Interval WAL: 1191 writes, 562 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 14:54:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:34.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:35 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:35.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:36 compute-1 ceph-mon[81715]: pgmap v2607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:36 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:36.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:37 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:54:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:37.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:54:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:38 compute-1 ceph-mon[81715]: pgmap v2608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:54:38 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:38 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:38.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:39 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:54:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:39.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:54:40 compute-1 podman[241828]: 2026-01-22 14:54:40.071800216 +0000 UTC m=+0.060825751 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 22 14:54:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:54:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:40.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:54:40 compute-1 ceph-mon[81715]: pgmap v2609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 7 op/s
Jan 22 14:54:40 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:41.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:42 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:42 compute-1 ceph-mon[81715]: pgmap v2610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 18 op/s
Jan 22 14:54:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:54:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:42.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:54:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:43 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:43 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:43.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:44 compute-1 ceph-mon[81715]: pgmap v2611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 18 op/s
Jan 22 14:54:44 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:44 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:54:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:44.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:54:45 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:54:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:45.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:54:46 compute-1 ceph-mon[81715]: pgmap v2612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 691 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 394 KiB/s wr, 22 op/s
Jan 22 14:54:46 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:54:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:46.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:54:47 compute-1 sudo[241847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:54:47 compute-1 sudo[241847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:47 compute-1 sudo[241847]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:47 compute-1 sudo[241872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:54:47 compute-1 sudo[241872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:47 compute-1 sudo[241872]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:47 compute-1 sudo[241897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:54:47 compute-1 sudo[241897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:47 compute-1 sudo[241897]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:47 compute-1 sudo[241922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:54:47 compute-1 sudo[241922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:54:47.492 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:54:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:54:47.492 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:54:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:54:47.493 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:54:47 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:47 compute-1 sudo[241922]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:47.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:48.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:48 compute-1 ceph-mon[81715]: pgmap v2613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 MiB/s wr, 37 op/s
Jan 22 14:54:48 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:48 compute-1 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:49 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:49 compute-1 ceph-mon[81715]: pgmap v2614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 701 KiB/s rd, 1.5 MiB/s wr, 30 op/s
Jan 22 14:54:49 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:54:49 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:54:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:49.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:50.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:50 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:54:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:54:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:54:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:54:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:54:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:54:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:54:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:51.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:54:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:52.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:53 compute-1 ceph-mon[81715]: pgmap v2615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Jan 22 14:54:53 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:53.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:54 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:54 compute-1 ceph-mon[81715]: pgmap v2616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 9.8 KiB/s rd, 1.5 MiB/s wr, 18 op/s
Jan 22 14:54:54 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:54 compute-1 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:54.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:55 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:55.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:56 compute-1 ceph-mon[81715]: pgmap v2617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 9.8 KiB/s rd, 1.5 MiB/s wr, 18 op/s
Jan 22 14:54:56 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:56.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:57 compute-1 podman[241978]: 2026-01-22 14:54:57.097200618 +0000 UTC m=+0.084966375 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 14:54:57 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:54:57.185 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:54:57 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:54:57.186 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:54:57 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:57 compute-1 sudo[242004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:54:57 compute-1 sudo[242004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:57 compute-1 sudo[242004]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:57 compute-1 sudo[242029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:54:57 compute-1 sudo[242029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:57 compute-1 sudo[242029]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:54:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:57.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:54:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:58 compute-1 ceph-mon[81715]: pgmap v2618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Jan 22 14:54:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:54:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:54:58 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:58.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:59 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:59 compute-1 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:54:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:59.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:00.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:01 compute-1 ceph-mon[81715]: pgmap v2619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:01 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:01 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:01.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:02 compute-1 ceph-mon[81715]: pgmap v2620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:02 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:55:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:02.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:55:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:03 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:03.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:04 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:55:04.188 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:55:04 compute-1 ceph-mon[81715]: pgmap v2621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:04 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:04 compute-1 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:04.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:05 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:05.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:06 compute-1 ceph-mon[81715]: pgmap v2622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:06 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:06.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:07 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:08.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:08 compute-1 ceph-mon[81715]: pgmap v2623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:08 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:08 compute-1 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:55:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:08.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:55:09 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:10.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:10 compute-1 ceph-mon[81715]: pgmap v2624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:10 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:10.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:11 compute-1 podman[242055]: 2026-01-22 14:55:11.113152327 +0000 UTC m=+0.096040643 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 14:55:11 compute-1 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:12.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:12 compute-1 ceph-mon[81715]: pgmap v2625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:12 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:12.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:13 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:13 compute-1 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:14.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:14 compute-1 ceph-mon[81715]: pgmap v2626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:14 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:14.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:15 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:16.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:16 compute-1 ceph-mon[81715]: pgmap v2627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:16 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:55:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:16.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:55:17 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:18.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:18 compute-1 ceph-mon[81715]: pgmap v2628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:18 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1400842831' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:55:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1400842831' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:55:18 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4707 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:18.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:19 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:20.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:20.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:20 compute-1 ceph-mon[81715]: pgmap v2629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:20 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:21 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:21 compute-1 ceph-mon[81715]: pgmap v2630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:22.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:22.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:22 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:23 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:23 compute-1 ceph-mon[81715]: pgmap v2631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:23 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:24.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:24.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:24 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:25 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:25 compute-1 ceph-mon[81715]: pgmap v2632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:26.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:26.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:26 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:26 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:28.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:28 compute-1 podman[242075]: 2026-01-22 14:55:28.088464347 +0000 UTC m=+0.072357914 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Jan 22 14:55:28 compute-1 ceph-mon[81715]: pgmap v2633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:28 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:28.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:29 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:29 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:30.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:30 compute-1 ceph-mon[81715]: pgmap v2634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:30 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:30.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:31 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:55:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 14K writes, 79K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1861 writes, 9620 keys, 1861 commit groups, 1.0 writes per commit group, ingest: 16.43 MB, 0.03 MB/s
                                           Interval WAL: 1861 writes, 1861 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     67.5      1.27              0.28        51    0.025       0      0       0.0       0.0
                                             L6      1/0   10.15 MB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   5.5    140.7    121.2      3.88              1.34        50    0.078    454K    26K       0.0       0.0
                                            Sum      1/0   10.15 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.5    106.0    108.0      5.15              1.62       101    0.051    454K    26K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.7    163.2    163.1      0.51              0.26        14    0.036     89K   3619       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   0.0    140.7    121.2      3.88              1.34        50    0.078    454K    26K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     67.6      1.27              0.28        50    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.084, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.54 GB write, 0.12 MB/s write, 0.53 GB read, 0.11 MB/s read, 5.2 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 58.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000275 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3064,55.17 MB,18.1467%) FilterBlock(101,1.23 MB,0.403088%) IndexBlock(101,1.63 MB,0.536402%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 14:55:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:32.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:32 compute-1 ceph-mon[81715]: pgmap v2635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:32 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:32.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:33 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:34.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:34 compute-1 ceph-mon[81715]: pgmap v2636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:34 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:34 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:34.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:35 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:36.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:55:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:36.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:55:37 compute-1 ceph-mon[81715]: pgmap v2637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:37 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:38.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:38 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:38 compute-1 ceph-mon[81715]: pgmap v2638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:38 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:38.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:39 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:39 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:40.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:40 compute-1 ceph-mon[81715]: pgmap v2639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:40 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:40.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:41 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:41 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:55:41.939 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:55:41 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:55:41.940 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:55:41 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:55:41.941 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:55:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:55:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:42.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:55:42 compute-1 podman[242101]: 2026-01-22 14:55:42.085463013 +0000 UTC m=+0.075401186 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 14:55:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:42.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:42 compute-1 ceph-mon[81715]: pgmap v2640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 14:55:42 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:44 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:44 compute-1 ceph-mon[81715]: pgmap v2641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 14:55:44 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:44 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4732 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:44.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:55:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:44.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:55:45 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:46.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:46 compute-1 ceph-mon[81715]: pgmap v2642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 425 B/s rd, 340 B/s wr, 0 op/s
Jan 22 14:55:46 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:46.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:47 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:55:47.493 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:55:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:55:47.494 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:55:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:55:47.494 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:55:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:55:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:48.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:55:48 compute-1 ceph-mon[81715]: pgmap v2643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 425 B/s rd, 340 B/s wr, 0 op/s
Jan 22 14:55:48 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:48.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:49 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:49 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4737 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:55:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:50.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:55:50 compute-1 ceph-mon[81715]: pgmap v2644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 425 B/s rd, 340 B/s wr, 0 op/s
Jan 22 14:55:50 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:50.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:51 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:52.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:52 compute-1 ceph-mon[81715]: pgmap v2645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 425 B/s rd, 340 B/s wr, 0 op/s
Jan 22 14:55:52 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:52.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:53 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:54.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:54 compute-1 ceph-mon[81715]: pgmap v2646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 22 14:55:54 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:54 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2964490626' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:55:54 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2964490626' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:55:54 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4742 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:54.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:55 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:56.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:56 compute-1 ceph-mon[81715]: pgmap v2647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 1.1 KiB/s rd, 340 B/s wr, 1 op/s
Jan 22 14:55:56 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:56.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:57 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:57 compute-1 sudo[242122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:55:57 compute-1 sudo[242122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:57 compute-1 sudo[242122]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:58 compute-1 sudo[242147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:55:58 compute-1 sudo[242147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:58 compute-1 sudo[242147]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:58.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:58 compute-1 sudo[242172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:55:58 compute-1 sudo[242172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:58 compute-1 sudo[242172]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:58 compute-1 sudo[242198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 14:55:58 compute-1 sudo[242198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:58 compute-1 podman[242196]: 2026-01-22 14:55:58.318861413 +0000 UTC m=+0.143917535 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 14:55:58 compute-1 ceph-mon[81715]: pgmap v2648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:55:58 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:58 compute-1 podman[242322]: 2026-01-22 14:55:58.721944981 +0000 UTC m=+0.065704385 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 22 14:55:58 compute-1 podman[242322]: 2026-01-22 14:55:58.813210673 +0000 UTC m=+0.156970037 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 14:55:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:55:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:58.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:59 compute-1 sudo[242198]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:59 compute-1 sudo[242445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:55:59 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:59 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4747 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:55:59 compute-1 sudo[242445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:59 compute-1 sudo[242445]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:59 compute-1 sudo[242470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:55:59 compute-1 sudo[242470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:59 compute-1 sudo[242470]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:59 compute-1 sudo[242495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:55:59 compute-1 sudo[242495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:59 compute-1 sudo[242495]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:59 compute-1 sudo[242520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:55:59 compute-1 sudo[242520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:56:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:00.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:00 compute-1 sudo[242520]: pam_unix(sudo:session): session closed for user root
Jan 22 14:56:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:00.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:01 compute-1 ceph-mon[81715]: pgmap v2649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:56:01 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:56:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:56:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:56:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:02.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:02 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:56:02 compute-1 ceph-mon[81715]: pgmap v2650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:56:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:56:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:56:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:56:02 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:02.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:03 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:04.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:04 compute-1 ceph-mon[81715]: pgmap v2651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:56:04 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:04 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4752 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:04.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:05 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:06.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:06.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:07 compute-1 ceph-mon[81715]: pgmap v2652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:56:07 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:07 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:07 compute-1 ceph-mon[81715]: pgmap v2653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 9.1 KiB/s rd, 0 B/s wr, 11 op/s
Jan 22 14:56:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:08.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:08.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:08 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:08 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:08 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:10 compute-1 ceph-mon[81715]: pgmap v2654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:10 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:10.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:10 compute-1 sudo[242576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:56:10 compute-1 sudo[242576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:56:10 compute-1 sudo[242576]: pam_unix(sudo:session): session closed for user root
Jan 22 14:56:10 compute-1 sudo[242601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:56:10 compute-1 sudo[242601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:56:10 compute-1 sudo[242601]: pam_unix(sudo:session): session closed for user root
Jan 22 14:56:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:10.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:11 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:56:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:56:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:12.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:12 compute-1 ceph-mon[81715]: pgmap v2655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:12 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:12.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:13 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:13 compute-1 podman[242626]: 2026-01-22 14:56:13.463434348 +0000 UTC m=+0.087375958 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 14:56:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:14.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:14 compute-1 ceph-mon[81715]: pgmap v2656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:14 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:14 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4762 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:14.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:15 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:16.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:16 compute-1 ceph-mon[81715]: pgmap v2657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:16 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:16.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:17 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:18.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:18.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:18 compute-1 ceph-mon[81715]: pgmap v2658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:18 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/680053100' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:56:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/680053100' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:56:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:20.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:20.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:22.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:22.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:23 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:23 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4767 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:23 compute-1 ceph-mon[81715]: pgmap v2659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:23 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:24.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:24 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:56:24.142 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:56:24 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:56:24.144 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:56:24 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:24 compute-1 ceph-mon[81715]: pgmap v2660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:24 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:24 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:24 compute-1 ceph-mon[81715]: pgmap v2661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:24 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:24.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:26.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:26 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:26 compute-1 ceph-mon[81715]: pgmap v2662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:26 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:26.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:27 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:56:27.147 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:56:27 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:28.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:28.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:28 compute-1 ceph-mon[81715]: pgmap v2663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:28 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:28 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4777 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:29 compute-1 podman[242646]: 2026-01-22 14:56:29.132864347 +0000 UTC m=+0.113390470 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, 
managed_by=edpm_ansible, config_id=ovn_controller)
Jan 22 14:56:29 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:29 compute-1 ceph-mon[81715]: pgmap v2664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:29 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:30.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:30.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:31 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:32.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:32 compute-1 ceph-mon[81715]: pgmap v2665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:32 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.397438) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792397482, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 2023, "num_deletes": 251, "total_data_size": 3987062, "memory_usage": 4042832, "flush_reason": "Manual Compaction"}
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792414298, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 2598388, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78459, "largest_seqno": 80477, "table_properties": {"data_size": 2590610, "index_size": 4335, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19959, "raw_average_key_size": 21, "raw_value_size": 2573537, "raw_average_value_size": 2749, "num_data_blocks": 186, "num_entries": 936, "num_filter_entries": 936, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093653, "oldest_key_time": 1769093653, "file_creation_time": 1769093792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 16902 microseconds, and 7634 cpu microseconds.
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.414344) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 2598388 bytes OK
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.414365) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.416019) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.416046) EVENT_LOG_v1 {"time_micros": 1769093792416038, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.416071) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 3977642, prev total WAL file size 3977642, number of live WAL files 2.
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.417772) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(2537KB)], [162(10MB)]
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792417867, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 13241682, "oldest_snapshot_seqno": -1}
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 12759 keys, 11618584 bytes, temperature: kUnknown
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792495470, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 11618584, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11548482, "index_size": 37093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31941, "raw_key_size": 350863, "raw_average_key_size": 27, "raw_value_size": 11330620, "raw_average_value_size": 888, "num_data_blocks": 1348, "num_entries": 12759, "num_filter_entries": 12759, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.495767) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 11618584 bytes
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.496936) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.4 rd, 149.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 10.2 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(9.6) write-amplify(4.5) OK, records in: 13276, records dropped: 517 output_compression: NoCompression
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.496953) EVENT_LOG_v1 {"time_micros": 1769093792496946, "job": 104, "event": "compaction_finished", "compaction_time_micros": 77703, "compaction_time_cpu_micros": 33165, "output_level": 6, "num_output_files": 1, "total_output_size": 11618584, "num_input_records": 13276, "num_output_records": 12759, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792497511, "job": 104, "event": "table_file_deletion", "file_number": 164}
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792499369, "job": 104, "event": "table_file_deletion", "file_number": 162}
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.417711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.499511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.499520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.499522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.499524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:56:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.499526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:56:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:32.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:33 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:34.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:34 compute-1 ceph-mon[81715]: pgmap v2666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:34 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:34 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:34.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:35 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:36.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:36 compute-1 ceph-mon[81715]: pgmap v2667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:36 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:36.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:38.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:38 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:38 compute-1 ceph-mon[81715]: pgmap v2668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:38.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:39 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:39 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:39 compute-1 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:40.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:40 compute-1 ceph-mon[81715]: pgmap v2669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:40 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:40.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:41 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:42.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:42 compute-1 ceph-mon[81715]: pgmap v2670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:42 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:42.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:43 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:44 compute-1 podman[242672]: 2026-01-22 14:56:44.054745884 +0000 UTC m=+0.051027478 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:56:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:44.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:44.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:45 compute-1 ceph-mon[81715]: pgmap v2671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:45 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:45 compute-1 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 4793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:46.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:46 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:46 compute-1 ceph-mon[81715]: pgmap v2672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:46 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:46.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:56:47.495 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:56:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:56:47.495 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:56:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:56:47.495 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:56:47 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:48.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:48.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:49 compute-1 ceph-mon[81715]: pgmap v2673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:49 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:50.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:50 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:50 compute-1 ceph-mon[81715]: pgmap v2674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:50 compute-1 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 4798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:50 compute-1 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:50.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:51 compute-1 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:52.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:53 compute-1 ceph-mon[81715]: pgmap v2675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:53 compute-1 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:53 compute-1 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:54.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:54 compute-1 ceph-mon[81715]: pgmap v2676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:54 compute-1 ceph-mon[81715]: Health check update: 47 slow ops, oldest one blocked for 4803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:54.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:55 compute-1 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:55 compute-1 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:56.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:56 compute-1 ceph-mon[81715]: pgmap v2677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:56 compute-1 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:56.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:57 compute-1 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:58.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:58 compute-1 ceph-mon[81715]: pgmap v2678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:58 compute-1 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:56:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:58.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:00 compute-1 podman[242693]: 2026-01-22 14:57:00.134945291 +0000 UTC m=+0.131370787 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 14:57:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:00.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:00 compute-1 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:57:00 compute-1 ceph-mon[81715]: Health check update: 47 slow ops, oldest one blocked for 4808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:00.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:01 compute-1 ceph-mon[81715]: pgmap v2679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 22 14:57:01 compute-1 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:57:01 compute-1 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:57:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:02.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:02 compute-1 ceph-mon[81715]: pgmap v2680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Jan 22 14:57:02 compute-1 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:57:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:02.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:03 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:04.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:04 compute-1 ceph-mon[81715]: pgmap v2681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Jan 22 14:57:04 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:04 compute-1 ceph-mon[81715]: Health check update: 47 slow ops, oldest one blocked for 4813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:04.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:05 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:05 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:06.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:06 compute-1 ceph-mon[81715]: pgmap v2682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 0 B/s wr, 73 op/s
Jan 22 14:57:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:06.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:07 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:08.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:08 compute-1 ceph-mon[81715]: pgmap v2683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Jan 22 14:57:08 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:08 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:08.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:09 compute-1 ceph-mon[81715]: pgmap v2684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Jan 22 14:57:09 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:10.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:10 compute-1 sudo[242719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:57:10 compute-1 sudo[242719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:10 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:10 compute-1 sudo[242719]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:10.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:10 compute-1 sudo[242744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:57:10 compute-1 sudo[242744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:10 compute-1 sudo[242744]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:11 compute-1 sudo[242769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:57:11 compute-1 sudo[242769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:11 compute-1 sudo[242769]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:11 compute-1 sudo[242794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:57:11 compute-1 sudo[242794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:11 compute-1 sudo[242794]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:12.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:12 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:12 compute-1 ceph-mon[81715]: pgmap v2685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Jan 22 14:57:12 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:57:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:57:12 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:12.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:57:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:57:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:57:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:57:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:57:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:57:13 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:14.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:14 compute-1 ceph-mon[81715]: pgmap v2686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Jan 22 14:57:14 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:14.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:15 compute-1 podman[242851]: 2026-01-22 14:57:15.079470059 +0000 UTC m=+0.063124345 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 14:57:15 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:15 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:16.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:16 compute-1 ceph-mon[81715]: pgmap v2687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Jan 22 14:57:16 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:16.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:17 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:18.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:18 compute-1 ceph-mon[81715]: pgmap v2688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 0 B/s wr, 92 op/s
Jan 22 14:57:18 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3121868371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:57:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3121868371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:57:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:18.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:19 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:19 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:57:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:57:20 compute-1 sudo[242871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:57:20 compute-1 sudo[242871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:20 compute-1 sudo[242871]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:20 compute-1 sudo[242896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:57:20 compute-1 sudo[242896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:20 compute-1 sudo[242896]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:20.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:20 compute-1 ceph-mon[81715]: pgmap v2689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:20 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:20.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:21 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:21 compute-1 ceph-mon[81715]: pgmap v2690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:22.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:22.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:23 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:23 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:24 compute-1 ceph-mon[81715]: pgmap v2691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:24 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:24 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:24.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:24.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:25 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:26 compute-1 ceph-mon[81715]: pgmap v2692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:26 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:26.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:26 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:57:26.362 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:57:26 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:57:26.363 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:57:26 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:57:26.364 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:57:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:26.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:27 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:28 compute-1 ceph-mon[81715]: pgmap v2693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:28 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:28.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:28.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:29 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:29 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:30.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:30 compute-1 ceph-mon[81715]: pgmap v2694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:30 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:30.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:31 compute-1 podman[242921]: 2026-01-22 14:57:31.107130906 +0000 UTC m=+0.076968888 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 22 14:57:31 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:32.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:32 compute-1 ceph-mon[81715]: pgmap v2695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:32 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:32.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:33 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:34.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:34 compute-1 ceph-mon[81715]: pgmap v2696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:34 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:34 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:34.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:35 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:36.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:36 compute-1 ceph-mon[81715]: pgmap v2697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:36 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:36.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:37 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:38.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:38 compute-1 ceph-mon[81715]: pgmap v2698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:38 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:38.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:39 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:39 compute-1 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:40.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:40 compute-1 ceph-mon[81715]: pgmap v2699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:40 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:40.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:41 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:41 compute-1 ceph-mon[81715]: pgmap v2700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:42.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:42 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:42.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:43 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:43 compute-1 ceph-mon[81715]: pgmap v2701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:44.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:45.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:45 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:45 compute-1 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:46 compute-1 podman[242948]: 2026-01-22 14:57:46.067617855 +0000 UTC m=+0.053917127 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:57:46 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:46 compute-1 ceph-mon[81715]: pgmap v2702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:46 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:46.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:47.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:47 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:57:47.495 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:57:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:57:47.495 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:57:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:57:47.496 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:57:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:48.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:48 compute-1 ceph-mon[81715]: pgmap v2703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:48 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:49.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:49 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:49 compute-1 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:50.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:50 compute-1 ceph-mon[81715]: pgmap v2704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:50 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:51.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:51 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:52.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:52 compute-1 ceph-mon[81715]: pgmap v2705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:52 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:53.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:53 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:53 compute-1 ceph-mon[81715]: pgmap v2706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:54.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:54 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:54 compute-1 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:55.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:55 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:55 compute-1 ceph-mon[81715]: pgmap v2707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:56.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:56 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e163 e163: 3 total, 3 up, 3 in
Jan 22 14:57:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:57.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:57 compute-1 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:57 compute-1 ceph-mon[81715]: pgmap v2708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:57 compute-1 ceph-mon[81715]: osdmap e163: 3 total, 3 up, 3 in
Jan 22 14:57:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:58.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:59 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.022553) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879022862, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1371, "num_deletes": 250, "total_data_size": 2525957, "memory_usage": 2569128, "flush_reason": "Manual Compaction"}
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Jan 22 14:57:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:57:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:59.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879031225, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 1060655, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80482, "largest_seqno": 81848, "table_properties": {"data_size": 1056089, "index_size": 1833, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 14035, "raw_average_key_size": 21, "raw_value_size": 1045330, "raw_average_value_size": 1615, "num_data_blocks": 80, "num_entries": 647, "num_filter_entries": 647, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093793, "oldest_key_time": 1769093793, "file_creation_time": 1769093879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 8501 microseconds, and 4424 cpu microseconds.
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.031262) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 1060655 bytes OK
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.031278) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.032629) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.032641) EVENT_LOG_v1 {"time_micros": 1769093879032637, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.032656) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2519302, prev total WAL file size 2519302, number of live WAL files 2.
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.033837) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323537' seq:72057594037927935, type:22 .. '6D6772737461740032353038' seq:0, type:0; will stop at (end)
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(1035KB)], [165(11MB)]
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879033870, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 12679239, "oldest_snapshot_seqno": -1}
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 12927 keys, 9372842 bytes, temperature: kUnknown
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879100585, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 9372842, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9305576, "index_size": 33873, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32325, "raw_key_size": 355122, "raw_average_key_size": 27, "raw_value_size": 9088639, "raw_average_value_size": 703, "num_data_blocks": 1214, "num_entries": 12927, "num_filter_entries": 12927, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.101009) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 9372842 bytes
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.102837) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.6 rd, 140.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.1 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(20.8) write-amplify(8.8) OK, records in: 13406, records dropped: 479 output_compression: NoCompression
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.102868) EVENT_LOG_v1 {"time_micros": 1769093879102853, "job": 106, "event": "compaction_finished", "compaction_time_micros": 66877, "compaction_time_cpu_micros": 27378, "output_level": 6, "num_output_files": 1, "total_output_size": 9372842, "num_input_records": 13406, "num_output_records": 12927, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879103368, "job": 106, "event": "table_file_deletion", "file_number": 167}
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879107646, "job": 106, "event": "table_file_deletion", "file_number": 165}
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.033770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.107810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.107816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.107818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.107819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:57:59 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.107820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:58:00 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:00 compute-1 ceph-mon[81715]: pgmap v2710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 9.7 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Jan 22 14:58:00 compute-1 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:00 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:58:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:00.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:58:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:01.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e164 e164: 3 total, 3 up, 3 in
Jan 22 14:58:01 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:02 compute-1 podman[242967]: 2026-01-22 14:58:02.087494961 +0000 UTC m=+0.079115856 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 14:58:02 compute-1 ceph-mon[81715]: pgmap v2711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 22 14:58:02 compute-1 ceph-mon[81715]: osdmap e164: 3 total, 3 up, 3 in
Jan 22 14:58:02 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:02.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:03.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:03 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:04 compute-1 ceph-mon[81715]: pgmap v2713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Jan 22 14:58:04 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:04 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:04.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:05.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:05 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:06 compute-1 ceph-mon[81715]: pgmap v2714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 3.1 KiB/s wr, 23 op/s
Jan 22 14:58:06 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:58:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:06.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:58:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e165 e165: 3 total, 3 up, 3 in
Jan 22 14:58:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:07.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:07 compute-1 ceph-mon[81715]: osdmap e165: 3 total, 3 up, 3 in
Jan 22 14:58:07 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:08.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:08 compute-1 ceph-mon[81715]: pgmap v2716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 2.1 KiB/s wr, 20 op/s
Jan 22 14:58:08 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:09.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:09 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:09 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:10.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:10 compute-1 ceph-mon[81715]: pgmap v2717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.4 KiB/s wr, 20 op/s
Jan 22 14:58:10 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:11.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:11 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:12.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:12 compute-1 ceph-mon[81715]: pgmap v2718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Jan 22 14:58:12 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:13.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:13 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:14.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:14 compute-1 ceph-mon[81715]: pgmap v2719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Jan 22 14:58:14 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:14 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:15.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:15 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:16.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:16 compute-1 ceph-mon[81715]: pgmap v2720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.3 KiB/s wr, 26 op/s
Jan 22 14:58:16 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:58:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:17.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:58:17 compute-1 podman[242993]: 2026-01-22 14:58:17.052943745 +0000 UTC m=+0.044028918 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 22 14:58:17 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:17 compute-1 ceph-mon[81715]: pgmap v2721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 9.8 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Jan 22 14:58:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:18.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:18 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2214599222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:58:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2214599222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:58:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:19.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:19 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:19 compute-1 ceph-mon[81715]: pgmap v2722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Jan 22 14:58:19 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4888 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:20.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:20 compute-1 sudo[243012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:58:20 compute-1 sudo[243012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:20 compute-1 sudo[243012]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:20 compute-1 sudo[243037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:58:20 compute-1 sudo[243037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:20 compute-1 sudo[243037]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:20 compute-1 sudo[243062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:58:20 compute-1 sudo[243062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:20 compute-1 sudo[243062]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:20 compute-1 sudo[243087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 14:58:20 compute-1 sudo[243087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:20 compute-1 sudo[243087]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:20 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:58:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:58:20 compute-1 sudo[243132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:58:20 compute-1 sudo[243132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:20 compute-1 sudo[243132]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:21 compute-1 sudo[243157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:58:21 compute-1 sudo[243157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:21 compute-1 sudo[243157]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:21.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:21 compute-1 sudo[243182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:58:21 compute-1 sudo[243182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:21 compute-1 sudo[243182]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:21 compute-1 sudo[243207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:58:21 compute-1 sudo[243207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:21 compute-1 sudo[243207]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:21 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:21 compute-1 ceph-mon[81715]: pgmap v2723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 8.4 KiB/s rd, 1.1 KiB/s wr, 11 op/s
Jan 22 14:58:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:58:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:58:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:58:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:58:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:58:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:58:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:22.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:22 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:58:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:23.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:58:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:24 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:24 compute-1 ceph-mon[81715]: pgmap v2724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:24.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:25.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:25 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:25 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4893 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:25 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:26.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:26 compute-1 ceph-mon[81715]: pgmap v2725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:26 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:27.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:27 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:58:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:28.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:58:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:28 compute-1 ceph-mon[81715]: pgmap v2726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:28 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:29.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:29 compute-1 sudo[243263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:58:29 compute-1 sudo[243263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:29 compute-1 sudo[243263]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:29 compute-1 sudo[243288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:58:29 compute-1 sudo[243288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:29 compute-1 sudo[243288]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:29 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:29 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:58:29 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:58:29 compute-1 ceph-mon[81715]: pgmap v2727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:29 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4898 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:58:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:30.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:58:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:31.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:31 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:31 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:31 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:58:31.487 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:58:31 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:58:31.489 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:58:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:32.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:32 compute-1 ceph-mon[81715]: pgmap v2728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:32 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:33.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:33 compute-1 podman[243313]: 2026-01-22 14:58:33.143209093 +0000 UTC m=+0.110636187 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 14:58:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:33 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:58:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:34.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:58:34 compute-1 ceph-mon[81715]: pgmap v2729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:34 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:34 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4903 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e166 e166: 3 total, 3 up, 3 in
Jan 22 14:58:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:35.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:35 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:58:35.491 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:58:35 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:35 compute-1 ceph-mon[81715]: osdmap e166: 3 total, 3 up, 3 in
Jan 22 14:58:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:36.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e167 e167: 3 total, 3 up, 3 in
Jan 22 14:58:36 compute-1 ceph-mon[81715]: pgmap v2731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 102 B/s rd, 0 op/s
Jan 22 14:58:36 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:58:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:37.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:58:37 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:37 compute-1 ceph-mon[81715]: osdmap e167: 3 total, 3 up, 3 in
Jan 22 14:58:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e168 e168: 3 total, 3 up, 3 in
Jan 22 14:58:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:38.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:38 compute-1 ceph-mon[81715]: pgmap v2733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Jan 22 14:58:38 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:38 compute-1 ceph-mon[81715]: osdmap e168: 3 total, 3 up, 3 in
Jan 22 14:58:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:39.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:39 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:39 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4908 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:40.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:40 compute-1 ceph-mon[81715]: pgmap v2735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 5.0 KiB/s wr, 89 op/s
Jan 22 14:58:40 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:41.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:41 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:42.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:42 compute-1 ceph-mon[81715]: pgmap v2736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 6.6 KiB/s wr, 117 op/s
Jan 22 14:58:42 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:43.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:43 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 e169: 3 total, 3 up, 3 in
Jan 22 14:58:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:58:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:44.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:58:44 compute-1 ceph-mon[81715]: pgmap v2737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 5.2 KiB/s wr, 93 op/s
Jan 22 14:58:44 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:44 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:44 compute-1 ceph-mon[81715]: osdmap e169: 3 total, 3 up, 3 in
Jan 22 14:58:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:45.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:46.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:46 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:46 compute-1 ceph-mon[81715]: pgmap v2739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.5 KiB/s wr, 63 op/s
Jan 22 14:58:46 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:47.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:58:47.496 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:58:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:58:47.496 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:58:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:58:47.496 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:58:47 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:48 compute-1 podman[243340]: 2026-01-22 14:58:48.08077068 +0000 UTC m=+0.070204096 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:58:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:48.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:48 compute-1 ceph-mon[81715]: pgmap v2740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 3.0 KiB/s wr, 54 op/s
Jan 22 14:58:48 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:49.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:49 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:49 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:50.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:50 compute-1 ceph-mon[81715]: pgmap v2741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Jan 22 14:58:50 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:51.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:51 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:58:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:52.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:58:52 compute-1 ceph-mon[81715]: pgmap v2742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:52 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:53.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:53 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.047963) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934047990, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1057, "num_deletes": 259, "total_data_size": 1746764, "memory_usage": 1778752, "flush_reason": "Manual Compaction"}
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934056502, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 1147709, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81853, "largest_seqno": 82905, "table_properties": {"data_size": 1143055, "index_size": 2113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11768, "raw_average_key_size": 20, "raw_value_size": 1132988, "raw_average_value_size": 1970, "num_data_blocks": 90, "num_entries": 575, "num_filter_entries": 575, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093880, "oldest_key_time": 1769093880, "file_creation_time": 1769093934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 8592 microseconds, and 3922 cpu microseconds.
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.056553) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 1147709 bytes OK
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.056569) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.058566) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.058589) EVENT_LOG_v1 {"time_micros": 1769093934058582, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.058610) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1741350, prev total WAL file size 1741350, number of live WAL files 2.
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.059709) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373731' seq:72057594037927935, type:22 .. '6C6F676D0034303233' seq:0, type:0; will stop at (end)
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(1120KB)], [168(9153KB)]
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934059749, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 10520551, "oldest_snapshot_seqno": -1}
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 12967 keys, 10366624 bytes, temperature: kUnknown
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934139023, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 10366624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10297790, "index_size": 35313, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32453, "raw_key_size": 357307, "raw_average_key_size": 27, "raw_value_size": 10078884, "raw_average_value_size": 777, "num_data_blocks": 1270, "num_entries": 12967, "num_filter_entries": 12967, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.139317) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10366624 bytes
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.142998) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.6 rd, 130.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.9 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(18.2) write-amplify(9.0) OK, records in: 13502, records dropped: 535 output_compression: NoCompression
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.143049) EVENT_LOG_v1 {"time_micros": 1769093934143028, "job": 108, "event": "compaction_finished", "compaction_time_micros": 79344, "compaction_time_cpu_micros": 49816, "output_level": 6, "num_output_files": 1, "total_output_size": 10366624, "num_input_records": 13502, "num_output_records": 12967, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934143865, "job": 108, "event": "table_file_deletion", "file_number": 170}
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934147831, "job": 108, "event": "table_file_deletion", "file_number": 168}
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.059586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.147898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.147907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.147912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.147917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:58:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.147921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:58:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:58:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:54.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:58:54 compute-1 ceph-mon[81715]: pgmap v2743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:54 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:54 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4923 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:55.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:55 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:56.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:56 compute-1 ceph-mon[81715]: pgmap v2744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:56 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:58:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:57.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:58:57 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:57 compute-1 ceph-mon[81715]: pgmap v2745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:58.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:58 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:58:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:59.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:59 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:59 compute-1 ceph-mon[81715]: pgmap v2746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:59 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:00.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:00 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:01.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:01 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:01 compute-1 ceph-mon[81715]: pgmap v2747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:02.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:02 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:59:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:03.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:59:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:04 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:04 compute-1 ceph-mon[81715]: pgmap v2748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:04 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:04 compute-1 podman[243359]: 2026-01-22 14:59:04.170648838 +0000 UTC m=+0.153377771 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 14:59:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:04.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:05 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:05 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:05.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:06 compute-1 ceph-mon[81715]: pgmap v2749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:06 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:06.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:07 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:07.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:08 compute-1 ceph-mon[81715]: pgmap v2750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:08 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:08.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:09 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:09 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:09.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:10 compute-1 ceph-mon[81715]: pgmap v2751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:10 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:10.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:11 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:11.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:12 compute-1 ceph-mon[81715]: pgmap v2752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:12 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:12.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:13 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:13.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:14 compute-1 ceph-mon[81715]: pgmap v2753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:14 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:14 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:14.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:59:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:15.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:59:15 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:16 compute-1 ceph-mon[81715]: pgmap v2754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:16 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:16.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:17.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:17 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.315033) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957315082, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 554, "num_deletes": 251, "total_data_size": 659738, "memory_usage": 671008, "flush_reason": "Manual Compaction"}
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957321917, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 432980, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82910, "largest_seqno": 83459, "table_properties": {"data_size": 430246, "index_size": 705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7232, "raw_average_key_size": 19, "raw_value_size": 424544, "raw_average_value_size": 1141, "num_data_blocks": 31, "num_entries": 372, "num_filter_entries": 372, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093934, "oldest_key_time": 1769093934, "file_creation_time": 1769093957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 6985 microseconds, and 4099 cpu microseconds.
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.322008) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 432980 bytes OK
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.322050) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.323429) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.323461) EVENT_LOG_v1 {"time_micros": 1769093957323450, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.323497) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 656473, prev total WAL file size 656473, number of live WAL files 2.
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.324376) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(422KB)], [171(10123KB)]
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957324438, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 10799604, "oldest_snapshot_seqno": -1}
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 12828 keys, 9181788 bytes, temperature: kUnknown
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957373928, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 9181788, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9114770, "index_size": 33817, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32133, "raw_key_size": 355261, "raw_average_key_size": 27, "raw_value_size": 8898836, "raw_average_value_size": 693, "num_data_blocks": 1203, "num_entries": 12828, "num_filter_entries": 12828, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.374318) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 9181788 bytes
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.376030) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.7 rd, 185.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.9 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(46.1) write-amplify(21.2) OK, records in: 13339, records dropped: 511 output_compression: NoCompression
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.376061) EVENT_LOG_v1 {"time_micros": 1769093957376047, "job": 110, "event": "compaction_finished", "compaction_time_micros": 49610, "compaction_time_cpu_micros": 24882, "output_level": 6, "num_output_files": 1, "total_output_size": 9181788, "num_input_records": 13339, "num_output_records": 12828, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957376363, "job": 110, "event": "table_file_deletion", "file_number": 173}
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957379859, "job": 110, "event": "table_file_deletion", "file_number": 171}
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.324303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.379963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.379970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.379973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.379976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:59:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.379979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:59:18 compute-1 ceph-mon[81715]: pgmap v2755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:18 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:59:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:18.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:59:19 compute-1 podman[243387]: 2026-01-22 14:59:19.11315026 +0000 UTC m=+0.096746492 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 22 14:59:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:19.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4281868125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:59:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4281868125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:59:19 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:19 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4947 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:20.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:20 compute-1 ceph-mon[81715]: pgmap v2756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:20 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:21.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:21 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:22.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:22 compute-1 ceph-mon[81715]: pgmap v2757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:22 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:23.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:24.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:24 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:24 compute-1 ceph-mon[81715]: pgmap v2758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:24 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:24 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:25.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:25 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:26.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:26 compute-1 ceph-mon[81715]: pgmap v2759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:26 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:27.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:27 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:28.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:59:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:29.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:59:29 compute-1 sshd-session[243407]: Received disconnect from 116.169.59.117 port 35004:11:  [preauth]
Jan 22 14:59:29 compute-1 sshd-session[243407]: Disconnected from authenticating user root 116.169.59.117 port 35004 [preauth]
Jan 22 14:59:30 compute-1 ceph-mon[81715]: pgmap v2760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:30 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:30 compute-1 sudo[243409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:30 compute-1 sudo[243409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:30 compute-1 sudo[243409]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:30 compute-1 sudo[243434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:59:30 compute-1 sudo[243434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:30 compute-1 sudo[243434]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:30 compute-1 sudo[243459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:30 compute-1 sudo[243459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:30 compute-1 sudo[243459]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:30 compute-1 sudo[243484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:59:30 compute-1 sudo[243484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:30.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:30 compute-1 sudo[243484]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:31 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:31 compute-1 ceph-mon[81715]: pgmap v2761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:31 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:31 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:31 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:59:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:59:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 14:59:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 14:59:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:31.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:32.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:32 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:59:32.546 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:59:32 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:59:32.548 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:59:32 compute-1 ceph-mon[81715]: pgmap v2762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:32 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:33.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:33 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:59:33.549 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:59:33 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:59:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:59:33 compute-1 ceph-mon[81715]: pgmap v2763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:59:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:59:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:34.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:34 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:59:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:59:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:59:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:59:35 compute-1 podman[243539]: 2026-01-22 14:59:35.085094186 +0000 UTC m=+0.076445904 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 14:59:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:35.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:35 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:35 compute-1 ceph-mon[81715]: pgmap v2764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:35 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:36.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:36 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:37.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:37 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:37 compute-1 ceph-mon[81715]: pgmap v2765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:38.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:39 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:39.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:40 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:40 compute-1 ceph-mon[81715]: pgmap v2766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:40 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:40 compute-1 sudo[243566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:40 compute-1 sudo[243566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:40 compute-1 sudo[243566]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:40 compute-1 sudo[243591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:59:40 compute-1 sudo[243591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:40 compute-1 sudo[243591]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:40.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:41 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:59:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:59:41 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:41.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:42 compute-1 ceph-mon[81715]: pgmap v2767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:42 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:42.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:43 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:43.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:44 compute-1 ceph-mon[81715]: pgmap v2768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:44 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:44.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:45 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:45 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:45.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:46 compute-1 ceph-mon[81715]: pgmap v2769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:46 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:46.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:47 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:47.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:59:47.497 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:59:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:59:47.497 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:59:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 14:59:47.497 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:59:48 compute-1 ceph-mon[81715]: pgmap v2770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:48 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:48.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:49.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:49 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:50 compute-1 podman[243616]: 2026-01-22 14:59:50.113401866 +0000 UTC m=+0.096358792 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 22 14:59:50 compute-1 ceph-mon[81715]: pgmap v2771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:50 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:50 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:50.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:51.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:51 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:52 compute-1 ceph-mon[81715]: pgmap v2772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:52 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:52.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:53.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:53 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:54 compute-1 ceph-mon[81715]: pgmap v2773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:54 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:59:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:54.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:59:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:55.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:55 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:55 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:56.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:56 compute-1 ceph-mon[81715]: pgmap v2774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:56 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:57.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:57 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:58.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:58 compute-1 ceph-mon[81715]: pgmap v2775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:58 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 14:59:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:59.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:59 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:00:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:00.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:00:00 compute-1 ceph-mon[81715]: pgmap v2776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:00 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:00 compute-1 ceph-mon[81715]: Health detail: HEALTH_WARN 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops
Jan 22 15:00:00 compute-1 ceph-mon[81715]: [WRN] SLOW_OPS: 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops
Jan 22 15:00:00 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:01.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:01 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:02.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:02 compute-1 ceph-mon[81715]: pgmap v2777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:02 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:03.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:03 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:04.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:04 compute-1 ceph-mon[81715]: pgmap v2778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:04 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:05.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:05 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:05 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:06 compute-1 podman[243635]: 2026-01-22 15:00:06.101828826 +0000 UTC m=+0.087766409 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 22 15:00:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:00:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:06.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:00:06 compute-1 ceph-mon[81715]: pgmap v2779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:06 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:07.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:07 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:08.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:08 compute-1 ceph-mon[81715]: pgmap v2780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:08 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:09.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:09 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:00:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:10.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:00:10 compute-1 ceph-mon[81715]: pgmap v2781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:10 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:10 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:00:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:11.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:00:11 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:12.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:12 compute-1 ceph-mon[81715]: pgmap v2782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:12 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:13.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:13 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:14.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:14 compute-1 ceph-mon[81715]: pgmap v2783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:14 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:00:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:15.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:00:16 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:16 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 5003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:16 compute-1 ceph-mon[81715]: pgmap v2784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:16.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:17 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:17 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:00:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:17.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:00:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:18 compute-1 ceph-mon[81715]: pgmap v2785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:18 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:18.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:19.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1825904486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:00:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1825904486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:00:19 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:20 compute-1 ceph-mon[81715]: pgmap v2786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:20 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:20 compute-1 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 5008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:20.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:21 compute-1 podman[243662]: 2026-01-22 15:00:21.091626055 +0000 UTC m=+0.070071081 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 15:00:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:21.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:21 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:22 compute-1 ceph-mon[81715]: pgmap v2787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:22 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:22.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:00:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:23.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:00:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:23 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:24 compute-1 ceph-mon[81715]: pgmap v2788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:24 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:24.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:25.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:25 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:25 compute-1 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 5013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:26.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:26 compute-1 ceph-mon[81715]: pgmap v2789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:26 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:27.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:27 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:27 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1400099288' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:00:27 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1400099288' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:00:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:28.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:28 compute-1 ceph-mon[81715]: pgmap v2790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:28 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:29.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:29 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:00:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:30.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:00:30 compute-1 ceph-mon[81715]: pgmap v2791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 3 op/s
Jan 22 15:00:30 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:30 compute-1 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 5018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:00:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:31.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:00:31 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:32 compute-1 ceph-mon[81715]: pgmap v2792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 767 B/s wr, 14 op/s
Jan 22 15:00:32 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:33.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:33 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:33 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:00:33.906 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:00:33 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:00:33.908 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:00:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:34.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:34 compute-1 ceph-mon[81715]: pgmap v2793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 767 B/s wr, 14 op/s
Jan 22 15:00:34 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:35.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:35 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:35 compute-1 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 5023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:00:36 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3575390744' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:00:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:00:36 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3575390744' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:00:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:36.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:36 compute-1 ceph-mon[81715]: pgmap v2794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 938 B/s wr, 15 op/s
Jan 22 15:00:36 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:36 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3575390744' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:00:36 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3575390744' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:00:37 compute-1 podman[243681]: 2026-01-22 15:00:37.119786262 +0000 UTC m=+0.106512225 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:00:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:37.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:37 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:38.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:38 compute-1 ceph-mon[81715]: pgmap v2795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 938 B/s wr, 23 op/s
Jan 22 15:00:38 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:00:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:39.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:00:39 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:40.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:40 compute-1 sudo[243708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:00:40 compute-1 sudo[243708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:40 compute-1 sudo[243708]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:40 compute-1 sudo[243733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:00:40 compute-1 sudo[243733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:40 compute-1 sudo[243733]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:40 compute-1 sudo[243758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:00:40 compute-1 sudo[243758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:40 compute-1 sudo[243758]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:40 compute-1 sudo[243783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:00:40 compute-1 sudo[243783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:40 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:00:40.909 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:00:40 compute-1 ceph-mon[81715]: pgmap v2796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 22 15:00:40 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:40 compute-1 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 5028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:41 compute-1 sudo[243783]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:00:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:41.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:00:41 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:41 compute-1 ceph-mon[81715]: pgmap v2797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 767 B/s wr, 25 op/s
Jan 22 15:00:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:00:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:00:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:00:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:00:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:00:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:00:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:00:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:42.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:00:42 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:43.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:43 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:43 compute-1 ceph-mon[81715]: pgmap v2798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 426 B/s wr, 13 op/s
Jan 22 15:00:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:00:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:44.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:00:44 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:45.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:45 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:45 compute-1 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 5032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:45 compute-1 ceph-mon[81715]: pgmap v2799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 426 B/s wr, 13 op/s
Jan 22 15:00:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:46.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:47 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:00:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:47.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:00:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:00:47.498 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:00:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:00:47.498 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:00:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:00:47.498 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:00:48 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:48 compute-1 ceph-mon[81715]: pgmap v2800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:00:48 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:00:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:00:48 compute-1 sudo[243837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:00:48 compute-1 sudo[243837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:48 compute-1 sudo[243837]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:48 compute-1 sudo[243862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:00:48 compute-1 sudo[243862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:48 compute-1 sudo[243862]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:48.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:49 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:49.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:50 compute-1 ceph-mon[81715]: pgmap v2801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 3.7 KiB/s rd, 255 B/s wr, 5 op/s
Jan 22 15:00:50 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:50 compute-1 ceph-mon[81715]: Health check update: 92 slow ops, oldest one blocked for 5038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:50.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:51 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:51.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:52 compute-1 podman[243887]: 2026-01-22 15:00:52.061854604 +0000 UTC m=+0.053625108 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 15:00:52 compute-1 ceph-mon[81715]: pgmap v2802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 15:00:52 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:52.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:53 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:53.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:54 compute-1 ceph-mon[81715]: pgmap v2803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:54 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:54.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:55 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:55 compute-1 ceph-mon[81715]: Health check update: 92 slow ops, oldest one blocked for 5043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:55.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:56 compute-1 ceph-mon[81715]: pgmap v2804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:56 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:56.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:57.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:57 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:58.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:58 compute-1 ceph-mon[81715]: pgmap v2805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:58 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:00:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:59.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:59 compute-1 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:01:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:00.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:00 compute-1 ceph-mon[81715]: pgmap v2806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:00 compute-1 ceph-mon[81715]: 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:00 compute-1 ceph-mon[81715]: Health check update: 92 slow ops, oldest one blocked for 5048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:01.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:01 compute-1 CROND[243908]: (root) CMD (run-parts /etc/cron.hourly)
Jan 22 15:01:01 compute-1 run-parts[243911]: (/etc/cron.hourly) starting 0anacron
Jan 22 15:01:01 compute-1 run-parts[243917]: (/etc/cron.hourly) finished 0anacron
Jan 22 15:01:01 compute-1 CROND[243907]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 22 15:01:01 compute-1 ceph-mon[81715]: 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:02.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:02 compute-1 ceph-mon[81715]: pgmap v2807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:02 compute-1 ceph-mon[81715]: 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:03.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:03 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:04.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:04 compute-1 ceph-mon[81715]: pgmap v2808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:04 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:05.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:05 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:05 compute-1 ceph-mon[81715]: Health check update: 88 slow ops, oldest one blocked for 5053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:06.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:06 compute-1 ceph-mon[81715]: pgmap v2809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:06 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:07.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:07 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:08 compute-1 podman[243918]: 2026-01-22 15:01:08.131797062 +0000 UTC m=+0.114217963 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 15:01:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:08.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:08 compute-1 ceph-mon[81715]: pgmap v2810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:08 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:01:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:09.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:01:09 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:09 compute-1 ceph-mon[81715]: pgmap v2811: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:10.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:10 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:10 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 5057 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:11.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:11 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:11 compute-1 ceph-mon[81715]: pgmap v2812: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:12.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:13 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:13.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:14 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:14 compute-1 ceph-mon[81715]: pgmap v2813: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:14.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:15 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:15.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:16 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:16 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 5063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:16 compute-1 ceph-mon[81715]: pgmap v2814: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:16 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:16.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:17 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:17.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:18 compute-1 ceph-mon[81715]: pgmap v2815: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:18 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:18.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:01:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3039925830' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:01:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:01:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3039925830' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:01:19 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3039925830' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:01:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3039925830' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:01:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:19.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:20 compute-1 ceph-mon[81715]: pgmap v2816: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Jan 22 15:01:20 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:20 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 5067 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:20.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:21 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:21.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:22 compute-1 ceph-mon[81715]: pgmap v2817: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:01:22 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:22.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:23 compute-1 podman[243944]: 2026-01-22 15:01:23.069674242 +0000 UTC m=+0.057703849 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Jan 22 15:01:23 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:23.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:24 compute-1 ceph-mon[81715]: pgmap v2818: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:01:24 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:24.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:25 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:25 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 5072 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:25.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:26 compute-1 ceph-mon[81715]: pgmap v2819: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:01:26 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:01:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:26.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:01:27 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:27.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:28 compute-1 ceph-mon[81715]: pgmap v2820: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 22 KiB/s wr, 7 op/s
Jan 22 15:01:28 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:28.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:29 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:29.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:30.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:30 compute-1 ceph-mon[81715]: pgmap v2821: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 6.2 KiB/s rd, 22 KiB/s wr, 9 op/s
Jan 22 15:01:30 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:30 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 5077 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:31.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:31 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:32.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:32 compute-1 ceph-mon[81715]: pgmap v2822: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 22 KiB/s wr, 18 op/s
Jan 22 15:01:32 compute-1 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:01:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:33.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:01:33 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:01:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:34.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:01:34 compute-1 ceph-mon[81715]: pgmap v2823: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 22 15:01:34 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:01:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:35.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:01:36 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:36 compute-1 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 5082 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:36 compute-1 ceph-mon[81715]: pgmap v2824: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 22 15:01:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:01:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:36.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:01:37 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:37.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:38 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:38 compute-1 ceph-mon[81715]: pgmap v2825: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 22 15:01:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:38.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:39 compute-1 podman[243965]: 2026-01-22 15:01:39.131390008 +0000 UTC m=+0.128284432 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:01:39 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:39 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:39.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:40.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:41 compute-1 ceph-mon[81715]: pgmap v2826: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 597 B/s wr, 12 op/s
Jan 22 15:01:41 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:41 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 5088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:41.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:42 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:42 compute-1 ceph-mon[81715]: pgmap v2827: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 7.9 KiB/s rd, 341 B/s wr, 10 op/s
Jan 22 15:01:42 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:01:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:42.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:01:43 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:43.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:44 compute-1 ceph-mon[81715]: pgmap v2828: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:44 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:44.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:45 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:45 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 5092 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:45.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:46 compute-1 ceph-mon[81715]: pgmap v2829: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:46 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:01:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:46.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:01:47 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:47.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:01:47.498 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:01:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:01:47.499 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:01:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:01:47.499 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:01:48 compute-1 ceph-mon[81715]: pgmap v2830: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:48 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:48 compute-1 sudo[243991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:01:48 compute-1 sudo[243991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:48 compute-1 sudo[243991]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:48 compute-1 sudo[244016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:01:48 compute-1 sudo[244016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:48 compute-1 sudo[244016]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:48 compute-1 sudo[244041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:01:48 compute-1 sudo[244041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:48 compute-1 sudo[244041]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:48 compute-1 sudo[244066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:01:48 compute-1 sudo[244066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:01:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:48.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:01:49 compute-1 sudo[244066]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:49 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:49.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:50 compute-1 ceph-mon[81715]: pgmap v2831: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:01:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:01:50 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:01:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:01:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:01:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:01:50 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 5098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:50 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:01:50.437 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:01:50 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:01:50.440 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:01:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:50.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:51 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:51.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:52 compute-1 ceph-mon[81715]: pgmap v2832: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 15:01:52 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:52.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:53 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:53.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:53 compute-1 sshd-session[244121]: Connection closed by authenticating user root 45.148.10.121 port 45988 [preauth]
Jan 22 15:01:54 compute-1 podman[244123]: 2026-01-22 15:01:54.065584717 +0000 UTC m=+0.056309640 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 15:01:54 compute-1 ceph-mon[81715]: pgmap v2833: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 15:01:54 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:54.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:55 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:55 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 5103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:55.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:55 compute-1 sudo[244142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:01:55 compute-1 sudo[244142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:55 compute-1 sudo[244142]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:55 compute-1 sudo[244167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:01:55 compute-1 sudo[244167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:55 compute-1 sudo[244167]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:56 compute-1 ceph-mon[81715]: pgmap v2834: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 597 B/s rd, 426 B/s wr, 1 op/s
Jan 22 15:01:56 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:01:56 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:01:56 compute-1 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:56.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:57.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:57 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:58 compute-1 ceph-mon[81715]: pgmap v2835: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s
Jan 22 15:01:58 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:58.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:01:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:59.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:59 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:00 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:02:00.442 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:02:00 compute-1 ceph-mon[81715]: pgmap v2836: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s
Jan 22 15:02:00 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:00 compute-1 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 5108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:00.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:01.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:01 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.908393) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121908514, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 2446, "num_deletes": 251, "total_data_size": 4781749, "memory_usage": 4875096, "flush_reason": "Manual Compaction"}
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121925744, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 3108278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83465, "largest_seqno": 85905, "table_properties": {"data_size": 3099212, "index_size": 5239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23322, "raw_average_key_size": 21, "raw_value_size": 3079129, "raw_average_value_size": 2822, "num_data_blocks": 225, "num_entries": 1091, "num_filter_entries": 1091, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093958, "oldest_key_time": 1769093958, "file_creation_time": 1769094121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 17333 microseconds, and 7797 cpu microseconds.
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.925780) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 3108278 bytes OK
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.925797) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.927553) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.927567) EVENT_LOG_v1 {"time_micros": 1769094121927563, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.927583) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 4770604, prev total WAL file size 4770604, number of live WAL files 2.
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.928839) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(3035KB)], [174(8966KB)]
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121928895, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 12290066, "oldest_snapshot_seqno": -1}
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 13402 keys, 10597459 bytes, temperature: kUnknown
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121984398, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 10597459, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10526019, "index_size": 36831, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33541, "raw_key_size": 369013, "raw_average_key_size": 27, "raw_value_size": 10299313, "raw_average_value_size": 768, "num_data_blocks": 1325, "num_entries": 13402, "num_filter_entries": 13402, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.984737) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 10597459 bytes
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.986063) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.1 rd, 190.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 8.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 13919, records dropped: 517 output_compression: NoCompression
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.986082) EVENT_LOG_v1 {"time_micros": 1769094121986073, "job": 112, "event": "compaction_finished", "compaction_time_micros": 55588, "compaction_time_cpu_micros": 25663, "output_level": 6, "num_output_files": 1, "total_output_size": 10597459, "num_input_records": 13919, "num_output_records": 13402, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121986916, "job": 112, "event": "table_file_deletion", "file_number": 176}
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121988986, "job": 112, "event": "table_file_deletion", "file_number": 174}
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.928765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.989017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.989021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.989023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.989025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:02:01 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.989027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:02:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:02:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:02.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:02:02 compute-1 ceph-mon[81715]: pgmap v2837: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s
Jan 22 15:02:02 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:03.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:03 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:04.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:05 compute-1 ceph-mon[81715]: pgmap v2838: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 511 B/s rd, 426 B/s wr, 1 op/s
Jan 22 15:02:05 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:02:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:05.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:02:06 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:06 compute-1 ceph-mon[81715]: Health check update: 86 slow ops, oldest one blocked for 5113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:06 compute-1 ceph-mon[81715]: pgmap v2839: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 511 B/s rd, 426 B/s wr, 1 op/s
Jan 22 15:02:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:02:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:06.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:02:07 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:07 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:07.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:08 compute-1 ceph-mon[81715]: pgmap v2840: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 22 15:02:08 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:02:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:08.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:02:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:09.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:09 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:10 compute-1 podman[244192]: 2026-01-22 15:02:10.136624725 +0000 UTC m=+0.114374677 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:02:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:10.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:11 compute-1 ceph-mon[81715]: pgmap v2841: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:11 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:11 compute-1 ceph-mon[81715]: Health check update: 86 slow ops, oldest one blocked for 5118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:11.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:12 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:12 compute-1 ceph-mon[81715]: pgmap v2842: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:12.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:13 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:13 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:13.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:14 compute-1 ceph-mon[81715]: pgmap v2843: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:14 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:14.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:15.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:15 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:15 compute-1 ceph-mon[81715]: Health check update: 86 slow ops, oldest one blocked for 5123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:02:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:16.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:02:16 compute-1 ceph-mon[81715]: pgmap v2844: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:16 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:17.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:18 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:18 compute-1 ceph-mon[81715]: pgmap v2845: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:02:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:18.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:02:19 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/68083093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:02:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/68083093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:02:19 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:19.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:20 compute-1 ceph-mon[81715]: pgmap v2846: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:20 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:20 compute-1 ceph-mon[81715]: Health check update: 86 slow ops, oldest one blocked for 5128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:02:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:20.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:02:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:21.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:22 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:02:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:22.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:02:23 compute-1 ceph-mon[81715]: pgmap v2847: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:23 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:23 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:23.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:24 compute-1 ceph-mon[81715]: pgmap v2848: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:24 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:02:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:24.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:02:25 compute-1 podman[244220]: 2026-01-22 15:02:25.057576447 +0000 UTC m=+0.053544527 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:02:25 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:25 compute-1 ceph-mon[81715]: Health check update: 86 slow ops, oldest one blocked for 5133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:25.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:26 compute-1 ceph-mon[81715]: pgmap v2849: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:26 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:26.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:27.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:28 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:28.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:02:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:29.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:02:29 compute-1 ceph-mon[81715]: pgmap v2850: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:29 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:29 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:30.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:30 compute-1 ceph-mon[81715]: pgmap v2851: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:30 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:30 compute-1 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 5138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:31.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:31 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:02:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:32.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:02:33 compute-1 ceph-mon[81715]: pgmap v2852: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:33 compute-1 ceph-mon[81715]: 28 slow requests (by type [ 'delayed' : 28 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:02:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:33.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:34 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:34 compute-1 ceph-mon[81715]: pgmap v2853: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:34 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:34.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:35 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:35 compute-1 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 5143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:35.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:36 compute-1 ceph-mon[81715]: pgmap v2854: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 683 KiB/s rd, 0 op/s
Jan 22 15:02:36 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:36.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:37 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:37.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:38 compute-1 ceph-mon[81715]: pgmap v2855: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 7 op/s
Jan 22 15:02:38 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:38.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:39 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:39.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:40 compute-1 ceph-mon[81715]: pgmap v2856: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 22 15:02:40 compute-1 ceph-mon[81715]: 87 slow requests (by type [ 'delayed' : 87 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:40 compute-1 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 5148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:02:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:40.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:02:41 compute-1 podman[244241]: 2026-01-22 15:02:41.072966704 +0000 UTC m=+0.067803931 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 15:02:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:41.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:42 compute-1 ceph-mon[81715]: 34 slow requests (by type [ 'delayed' : 34 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:02:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:42.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:43 compute-1 ceph-mon[81715]: pgmap v2857: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 596 B/s wr, 22 op/s
Jan 22 15:02:43 compute-1 ceph-mon[81715]: 89 slow requests (by type [ 'delayed' : 89 ] most affected pool [ 'vms' : 60 ])
Jan 22 15:02:43 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:43.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:44 compute-1 ceph-mon[81715]: pgmap v2858: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 596 B/s wr, 22 op/s
Jan 22 15:02:44 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:44.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:45.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 15:02:45 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4231267640' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 15:02:45 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:45 compute-1 ceph-mon[81715]: Health check update: 87 slow ops, oldest one blocked for 5153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:46.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:47 compute-1 ceph-mon[81715]: pgmap v2859: 305 pgs: 2 active+clean+laggy, 303 active+clean; 740 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 874 KiB/s wr, 26 op/s
Jan 22 15:02:47 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4231267640' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 15:02:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:02:47.499 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:02:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:02:47.500 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:02:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:02:47.500 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:02:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:02:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:47.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:02:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e170 e170: 3 total, 3 up, 3 in
Jan 22 15:02:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:48.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e171 e171: 3 total, 3 up, 3 in
Jan 22 15:02:49 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:49 compute-1 ceph-mon[81715]: pgmap v2860: 305 pgs: 2 active+clean+laggy, 303 active+clean; 768 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 22 15:02:49 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:49 compute-1 ceph-mon[81715]: osdmap e170: 3 total, 3 up, 3 in
Jan 22 15:02:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:49.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:50 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:50 compute-1 ceph-mon[81715]: osdmap e171: 3 total, 3 up, 3 in
Jan 22 15:02:50 compute-1 ceph-mon[81715]: pgmap v2863: 305 pgs: 2 active+clean+laggy, 303 active+clean; 768 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 2.7 MiB/s wr, 28 op/s
Jan 22 15:02:50 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:50 compute-1 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 5158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:02:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:50.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:02:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e172 e172: 3 total, 3 up, 3 in
Jan 22 15:02:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:02:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:51.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:02:52 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:52 compute-1 ceph-mon[81715]: pgmap v2864: 305 pgs: 2 active+clean+laggy, 303 active+clean; 768 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.7 MiB/s wr, 33 op/s
Jan 22 15:02:52 compute-1 ceph-mon[81715]: osdmap e172: 3 total, 3 up, 3 in
Jan 22 15:02:52 compute-1 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 41 ])
Jan 22 15:02:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:02:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:52.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:02:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:53.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:53 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:54.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:54 compute-1 ceph-mon[81715]: pgmap v2866: 305 pgs: 2 active+clean+laggy, 303 active+clean; 768 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 5.2 KiB/s rd, 852 B/s wr, 7 op/s
Jan 22 15:02:54 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:55 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:02:55.270 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:02:55 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:02:55.272 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:02:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:55.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:56 compute-1 podman[244268]: 2026-01-22 15:02:56.071624953 +0000 UTC m=+0.059288412 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:02:56 compute-1 sudo[244287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:02:56 compute-1 sudo[244287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:02:56 compute-1 sudo[244287]: pam_unix(sudo:session): session closed for user root
Jan 22 15:02:56 compute-1 sudo[244312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:02:56 compute-1 sudo[244312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:02:56 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:56 compute-1 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 5163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:56 compute-1 sudo[244312]: pam_unix(sudo:session): session closed for user root
Jan 22 15:02:56 compute-1 sudo[244337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:02:56 compute-1 sudo[244337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:02:56 compute-1 sudo[244337]: pam_unix(sudo:session): session closed for user root
Jan 22 15:02:56 compute-1 sudo[244362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:02:56 compute-1 sudo[244362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:02:56 compute-1 sudo[244362]: pam_unix(sudo:session): session closed for user root
Jan 22 15:02:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:56.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:02:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:57.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:02:57 compute-1 ceph-mon[81715]: pgmap v2867: 305 pgs: 2 active+clean+laggy, 303 active+clean; 782 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 678 KiB/s wr, 10 op/s
Jan 22 15:02:57 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 15:02:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:02:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:02:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:58 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:58 compute-1 ceph-mon[81715]: pgmap v2868: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 22 15:02:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:02:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:02:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:02:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:02:58 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:02:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:58.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:02:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:02:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:59.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:59 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:00 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 e173: 3 total, 3 up, 3 in
Jan 22 15:03:00 compute-1 ceph-mon[81715]: pgmap v2869: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 26 op/s
Jan 22 15:03:00 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:00 compute-1 ceph-mon[81715]: Health check update: 82 slow ops, oldest one blocked for 5168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:00 compute-1 ceph-mon[81715]: osdmap e173: 3 total, 3 up, 3 in
Jan 22 15:03:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:03:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:00.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:03:01 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:03:01.274 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:03:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:01.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:01 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:02 compute-1 ceph-mon[81715]: pgmap v2871: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 22 op/s
Jan 22 15:03:02 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:03:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:02.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:03:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:03.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:03 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:03:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:03:04 compute-1 sudo[244418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:03:04 compute-1 sudo[244418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:03:04 compute-1 sudo[244418]: pam_unix(sudo:session): session closed for user root
Jan 22 15:03:04 compute-1 sudo[244443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:03:04 compute-1 sudo[244443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:03:04 compute-1 sudo[244443]: pam_unix(sudo:session): session closed for user root
Jan 22 15:03:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:04.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:04 compute-1 ceph-mon[81715]: pgmap v2872: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 21 op/s
Jan 22 15:03:04 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:05.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:06 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:06 compute-1 ceph-mon[81715]: Health check update: 82 slow ops, oldest one blocked for 5173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:06.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:07 compute-1 ceph-mon[81715]: pgmap v2873: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 19 op/s
Jan 22 15:03:07 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:07 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:07.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:08 compute-1 ceph-mon[81715]: pgmap v2874: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:08 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:08.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:09 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:09.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:10 compute-1 ceph-mon[81715]: pgmap v2875: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:10 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:10 compute-1 ceph-mon[81715]: Health check update: 82 slow ops, oldest one blocked for 5178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:03:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:10.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:03:11 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:11.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:12 compute-1 podman[244468]: 2026-01-22 15:03:12.102553639 +0000 UTC m=+0.096372542 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 15:03:12 compute-1 ceph-mon[81715]: pgmap v2876: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 185 B/s rd, 92 B/s wr, 0 op/s
Jan 22 15:03:12 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:12.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:13 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:13.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:14 compute-1 ceph-mon[81715]: pgmap v2877: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 22 15:03:14 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:14.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:15.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:15 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:15 compute-1 ceph-mon[81715]: Health check update: 82 slow ops, oldest one blocked for 5183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:16 compute-1 ceph-mon[81715]: pgmap v2878: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 22 15:03:16 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:16.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:17.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:17 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:18 compute-1 ceph-mon[81715]: pgmap v2879: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 22 15:03:18 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4050759706' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:03:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4050759706' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:03:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:18.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:19.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:20 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:03:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:20.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:03:21 compute-1 ceph-mon[81715]: pgmap v2880: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 22 15:03:21 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:21 compute-1 ceph-mon[81715]: Health check update: 82 slow ops, oldest one blocked for 5188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:21 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:03:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:21.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:03:22 compute-1 ceph-mon[81715]: pgmap v2881: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 22 15:03:22 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:22.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:23.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:24 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:24 compute-1 ceph-mon[81715]: pgmap v2882: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 15:03:24 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:03:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:24.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:03:25 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:25 compute-1 ceph-mon[81715]: Health check update: 82 slow ops, oldest one blocked for 5193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:25.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:26 compute-1 ceph-mon[81715]: pgmap v2883: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 15:03:26 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:03:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:26.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:03:27 compute-1 podman[244496]: 2026-01-22 15:03:27.057901059 +0000 UTC m=+0.052968130 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 15:03:27 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:27 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/150705854' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:03:27 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/150705854' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:03:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:27.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:28 compute-1 ceph-mon[81715]: pgmap v2884: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:28 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:28.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:29 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:03:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:29.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #178. Immutable memtables: 0.
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.089194) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 178
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210089249, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1379, "num_deletes": 257, "total_data_size": 2542825, "memory_usage": 2590016, "flush_reason": "Manual Compaction"}
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #179: started
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210106283, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 179, "file_size": 1671345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85910, "largest_seqno": 87284, "table_properties": {"data_size": 1665616, "index_size": 2868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14497, "raw_average_key_size": 20, "raw_value_size": 1653115, "raw_average_value_size": 2361, "num_data_blocks": 124, "num_entries": 700, "num_filter_entries": 700, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094122, "oldest_key_time": 1769094122, "file_creation_time": 1769094210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 17144 microseconds, and 8837 cpu microseconds.
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.106342) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #179: 1671345 bytes OK
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.106365) [db/memtable_list.cc:519] [default] Level-0 commit table #179 started
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108005) [db/memtable_list.cc:722] [default] Level-0 commit table #179: memtable #1 done
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108022) EVENT_LOG_v1 {"time_micros": 1769094210108016, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108041) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2536071, prev total WAL file size 2536071, number of live WAL files 2.
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000175.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108922) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303232' seq:72057594037927935, type:22 .. '6C6F676D0034323735' seq:0, type:0; will stop at (end)
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [179(1632KB)], [177(10MB)]
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210108972, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [179], "files_L6": [177], "score": -1, "input_data_size": 12268804, "oldest_snapshot_seqno": -1}
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #180: 13571 keys, 12122300 bytes, temperature: kUnknown
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210172859, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 180, "file_size": 12122300, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12048170, "index_size": 39073, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33989, "raw_key_size": 374189, "raw_average_key_size": 27, "raw_value_size": 11816867, "raw_average_value_size": 870, "num_data_blocks": 1415, "num_entries": 13571, "num_filter_entries": 13571, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 180, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.173329) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 12122300 bytes
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.175148) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.6 rd, 189.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 10.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(14.6) write-amplify(7.3) OK, records in: 14102, records dropped: 531 output_compression: NoCompression
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.175191) EVENT_LOG_v1 {"time_micros": 1769094210175175, "job": 114, "event": "compaction_finished", "compaction_time_micros": 64025, "compaction_time_cpu_micros": 30047, "output_level": 6, "num_output_files": 1, "total_output_size": 12122300, "num_input_records": 14102, "num_output_records": 13571, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210176214, "job": 114, "event": "table_file_deletion", "file_number": 179}
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000177.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210177867, "job": 114, "event": "table_file_deletion", "file_number": 177}
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.177949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.177954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.177955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.177957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:03:30 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.177958) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:03:30 compute-1 ceph-mon[81715]: pgmap v2885: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 8.9 KiB/s rd, 255 B/s wr, 11 op/s
Jan 22 15:03:30 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:30 compute-1 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:30.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:31.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:31 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:32 compute-1 ceph-mon[81715]: pgmap v2886: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:03:32 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:32.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:33 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:03:33.238 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:03:33 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:03:33.239 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:03:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:33.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:33 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:34 compute-1 ceph-mon[81715]: pgmap v2887: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:03:34 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:34.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:35.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:35 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:35 compute-1 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:36 compute-1 ceph-mon[81715]: pgmap v2888: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:03:36 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:36.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:03:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:37.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:03:37 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:38 compute-1 ceph-mon[81715]: pgmap v2889: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:03:38 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:38.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:03:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:39.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:03:39 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:40 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:03:40.241 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:03:40 compute-1 ceph-mon[81715]: pgmap v2890: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:03:40 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:40 compute-1 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:40.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:41.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:41 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:42 compute-1 ceph-mon[81715]: pgmap v2891: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Jan 22 15:03:42 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:42.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:43 compute-1 podman[244516]: 2026-01-22 15:03:43.130521115 +0000 UTC m=+0.102563947 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:03:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:43.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:43 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:44 compute-1 ceph-mon[81715]: pgmap v2892: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:44 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:44.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:45.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:45 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:45 compute-1 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:46 compute-1 ceph-mon[81715]: pgmap v2893: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:46 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:46.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:03:47.501 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:03:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:03:47.501 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:03:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:03:47.502 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:03:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:47.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:47 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:48 compute-1 ceph-mon[81715]: pgmap v2894: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:48 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:49.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:03:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:49.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:03:50 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:03:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:51.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:03:51 compute-1 ceph-mon[81715]: pgmap v2895: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:51 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:51 compute-1 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:51 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:51.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:52 compute-1 ceph-mon[81715]: pgmap v2896: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:52 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:53.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:53 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:53.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:54 compute-1 ceph-mon[81715]: pgmap v2897: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:54 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:55.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:55 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:55 compute-1 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:03:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:55.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:03:56 compute-1 ceph-mon[81715]: pgmap v2898: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:56 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:57.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:57 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:03:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:57.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:03:58 compute-1 podman[244543]: 2026-01-22 15:03:58.071907887 +0000 UTC m=+0.064770091 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 15:03:58 compute-1 ceph-mon[81715]: pgmap v2899: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:58 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:59.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:59 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:03:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:03:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:59.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:00 compute-1 ceph-mon[81715]: pgmap v2900: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:00 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:00 compute-1 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:01.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:01 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:01.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:02 compute-1 ceph-mon[81715]: pgmap v2901: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:02 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:03.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:03 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:03.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:04 compute-1 ceph-mon[81715]: pgmap v2902: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:04 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:04 compute-1 sudo[244562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:04:04 compute-1 sudo[244562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:04 compute-1 sudo[244562]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:04 compute-1 sudo[244587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:04:04 compute-1 sudo[244587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:04 compute-1 sudo[244587]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:04 compute-1 sudo[244612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:04:04 compute-1 sudo[244612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:04 compute-1 sudo[244612]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:04 compute-1 sudo[244637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:04:04 compute-1 sudo[244637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:04 compute-1 sudo[244637]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:05.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:05 compute-1 sudo[244693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:04:05 compute-1 sudo[244693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:05 compute-1 sudo[244693]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:05 compute-1 sudo[244718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:04:05 compute-1 sudo[244718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:05 compute-1 sudo[244718]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:05 compute-1 sudo[244743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:04:05 compute-1 sudo[244743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:05 compute-1 sudo[244743]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:05 compute-1 sudo[244768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 22 15:04:05 compute-1 sudo[244768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:05 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:05 compute-1 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:05 compute-1 sudo[244768]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:05.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:06 compute-1 ceph-mon[81715]: pgmap v2903: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:06 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:04:06 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:04:06 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:04:06 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:04:06 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:04:06 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:04:06 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:04:06 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:04:06 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:04:06 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:04:06 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:07.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:07 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:07.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:08 compute-1 ceph-mon[81715]: pgmap v2904: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:08 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:09.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:09 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:09.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:10 compute-1 ceph-mon[81715]: pgmap v2905: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:10 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:10 compute-1 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:11.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:11 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:11.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:12 compute-1 sudo[244811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:04:12 compute-1 sudo[244811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:12 compute-1 sudo[244811]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:12 compute-1 sudo[244836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:04:12 compute-1 sudo[244836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:12 compute-1 sudo[244836]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:12 compute-1 ceph-mon[81715]: pgmap v2906: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:12 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:04:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:04:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:13.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:13 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:13.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:14 compute-1 podman[244861]: 2026-01-22 15:04:14.121896918 +0000 UTC m=+0.117524562 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 15:04:14 compute-1 ceph-mon[81715]: pgmap v2907: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:14 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:15.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:15 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:15 compute-1 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:15.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:16 compute-1 ceph-mon[81715]: pgmap v2908: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:16 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:17.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:17.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:17 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:18 compute-1 ceph-mon[81715]: pgmap v2909: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:18 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3231129722' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:04:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3231129722' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:04:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:19.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:19.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:19 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:20 compute-1 ceph-mon[81715]: pgmap v2910: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 683 KiB/s rd, 170 B/s wr, 1 op/s
Jan 22 15:04:20 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:20 compute-1 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:21.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:21.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:21 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:22 compute-1 ceph-mon[81715]: pgmap v2911: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 9 op/s
Jan 22 15:04:22 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:23.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:23.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:23 compute-1 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:24 compute-1 ceph-mon[81715]: pgmap v2912: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 9 op/s
Jan 22 15:04:24 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:25.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:25.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:25 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:25 compute-1 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:26 compute-1 ceph-mon[81715]: pgmap v2913: 305 pgs: 2 active+clean+laggy, 303 active+clean; 814 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 528 KiB/s wr, 33 op/s
Jan 22 15:04:26 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:27.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:27.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:27 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:28 compute-1 ceph-mon[81715]: pgmap v2914: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 22 15:04:28 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:29.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:29 compute-1 podman[244887]: 2026-01-22 15:04:29.117846741 +0000 UTC m=+0.101256840 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 15:04:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:29.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:29 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:29 compute-1 ceph-mon[81715]: pgmap v2915: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 22 15:04:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:31.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:31 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:31 compute-1 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 5258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:31.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:32 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:32 compute-1 ceph-mon[81715]: pgmap v2916: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 22 15:04:32 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:33.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:33 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:04:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:33.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:04:34 compute-1 ceph-mon[81715]: pgmap v2917: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 22 15:04:34 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:04:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.5 total, 600.0 interval
                                           Cumulative writes: 14K writes, 45K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 14K writes, 4803 syncs, 3.00 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1281 writes, 2798 keys, 1281 commit groups, 1.0 writes per commit group, ingest: 1.61 MB, 0.00 MB/s
                                           Interval WAL: 1281 writes, 604 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 15:04:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:35.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:35 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:35 compute-1 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 5263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:35.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:36 compute-1 ceph-mon[81715]: pgmap v2918: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 22 15:04:36 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:37.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:37 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:37.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:38 compute-1 ceph-mon[81715]: pgmap v2919: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 5.2 KiB/s rd, 1.3 MiB/s wr, 8 op/s
Jan 22 15:04:38 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:39.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:39 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:39.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:40 compute-1 ceph-mon[81715]: pgmap v2920: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:40 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:40 compute-1 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 5268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:41.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:41 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:41.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:42 compute-1 ceph-mon[81715]: pgmap v2921: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:42 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:43.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:43 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:43.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:44 compute-1 ceph-mon[81715]: pgmap v2922: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:44 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:04:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:45.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:04:45 compute-1 podman[244906]: 2026-01-22 15:04:45.111295907 +0000 UTC m=+0.100052347 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 15:04:45 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:45 compute-1 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 5273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:45.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:46 compute-1 ceph-mon[81715]: pgmap v2923: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:46 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:47.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:47 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:04:47.502 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:04:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:04:47.502 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:04:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:04:47.502 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:04:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:47.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:48 compute-1 ceph-mon[81715]: pgmap v2924: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:48 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:49.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:49 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #181. Immutable memtables: 0.
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.512956) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 181
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289512987, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1341, "num_deletes": 251, "total_data_size": 2359698, "memory_usage": 2388904, "flush_reason": "Manual Compaction"}
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #182: started
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289525794, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 182, "file_size": 1539104, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87289, "largest_seqno": 88625, "table_properties": {"data_size": 1533722, "index_size": 2585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13910, "raw_average_key_size": 20, "raw_value_size": 1522007, "raw_average_value_size": 2275, "num_data_blocks": 111, "num_entries": 669, "num_filter_entries": 669, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094210, "oldest_key_time": 1769094210, "file_creation_time": 1769094289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 13039 microseconds, and 5013 cpu microseconds.
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.525990) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #182: 1539104 bytes OK
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.526087) [db/memtable_list.cc:519] [default] Level-0 commit table #182 started
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.527343) [db/memtable_list.cc:722] [default] Level-0 commit table #182: memtable #1 done
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.527365) EVENT_LOG_v1 {"time_micros": 1769094289527357, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.527387) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 2353180, prev total WAL file size 2353180, number of live WAL files 2.
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000178.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.529217) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [182(1503KB)], [180(11MB)]
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289529257, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [182], "files_L6": [180], "score": -1, "input_data_size": 13661404, "oldest_snapshot_seqno": -1}
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #183: 13723 keys, 11987791 bytes, temperature: kUnknown
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289637361, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 183, "file_size": 11987791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11912960, "index_size": 39390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34373, "raw_key_size": 378529, "raw_average_key_size": 27, "raw_value_size": 11679362, "raw_average_value_size": 851, "num_data_blocks": 1424, "num_entries": 13723, "num_filter_entries": 13723, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 183, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.637704) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 11987791 bytes
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.639218) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.3 rd, 110.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 11.6 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(16.7) write-amplify(7.8) OK, records in: 14240, records dropped: 517 output_compression: NoCompression
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.639240) EVENT_LOG_v1 {"time_micros": 1769094289639230, "job": 116, "event": "compaction_finished", "compaction_time_micros": 108185, "compaction_time_cpu_micros": 54446, "output_level": 6, "num_output_files": 1, "total_output_size": 11987791, "num_input_records": 14240, "num_output_records": 13723, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289639726, "job": 116, "event": "table_file_deletion", "file_number": 182}
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000180.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289642119, "job": 116, "event": "table_file_deletion", "file_number": 180}
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.529145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.642279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.642285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.642288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.642289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:04:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.642291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:04:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:49.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:50 compute-1 ceph-mon[81715]: pgmap v2925: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:50 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:50 compute-1 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 5278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:04:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:51.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:04:51 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:52 compute-1 ceph-mon[81715]: pgmap v2926: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:52 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:53.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:53 compute-1 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:53.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:54 compute-1 ceph-mon[81715]: pgmap v2927: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:54 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 15:04:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:55.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:55 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 15:04:55 compute-1 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 5283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:55.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:56 compute-1 ceph-mon[81715]: pgmap v2928: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:56 compute-1 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 15:04:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:57.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:57.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:57 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:04:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:58 compute-1 ceph-mon[81715]: pgmap v2929: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:58 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:04:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:59.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:04:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:59.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:59 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:00 compute-1 podman[244932]: 2026-01-22 15:05:00.095423224 +0000 UTC m=+0.081230217 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 15:05:00 compute-1 ceph-mon[81715]: pgmap v2930: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:00 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:00 compute-1 ceph-mon[81715]: Health check update: 82 slow ops, oldest one blocked for 5288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:01.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:01.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:01 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:02 compute-1 ceph-mon[81715]: pgmap v2931: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:02 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:03.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:03.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:03 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:04 compute-1 ceph-mon[81715]: pgmap v2932: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:04 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:05.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:05.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:05 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:05 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 5293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:06 compute-1 ceph-mon[81715]: pgmap v2933: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:06 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:07.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:07.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:07 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:08 compute-1 ceph-mon[81715]: pgmap v2934: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:08 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:09.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:09.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:09 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:10 compute-1 ceph-mon[81715]: pgmap v2935: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:10 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:10 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 5298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:05:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:11.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:05:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:11.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:12 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:12 compute-1 ceph-mon[81715]: pgmap v2936: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:12 compute-1 sudo[244951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:12 compute-1 sudo[244951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:12 compute-1 sudo[244951]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:12 compute-1 sudo[244976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:05:12 compute-1 sudo[244976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:12 compute-1 sudo[244976]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:12 compute-1 sudo[245001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:12 compute-1 sudo[245001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:12 compute-1 sudo[245001]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:12 compute-1 sudo[245026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:05:12 compute-1 sudo[245026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:13 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:13.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:13 compute-1 sudo[245026]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:13 compute-1 sudo[245082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:13 compute-1 sudo[245082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:13 compute-1 sudo[245082]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:13 compute-1 sudo[245107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:05:13 compute-1 sudo[245107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:13 compute-1 sudo[245107]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:13 compute-1 sudo[245132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:13 compute-1 sudo[245132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:13 compute-1 sudo[245132]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:13 compute-1 sudo[245157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- inventory --format=json-pretty --filter-for-batch
Jan 22 15:05:13 compute-1 sudo[245157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:13.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:13 compute-1 podman[245222]: 2026-01-22 15:05:13.975091617 +0000 UTC m=+0.041978621 container create e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 15:05:14 compute-1 systemd[1]: Started libpod-conmon-e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854.scope.
Jan 22 15:05:14 compute-1 systemd[1]: Started libcrun container.
Jan 22 15:05:14 compute-1 podman[245222]: 2026-01-22 15:05:13.953094721 +0000 UTC m=+0.019981735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 15:05:14 compute-1 podman[245222]: 2026-01-22 15:05:14.060746393 +0000 UTC m=+0.127633417 container init e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 15:05:14 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:14 compute-1 ceph-mon[81715]: pgmap v2937: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:14 compute-1 podman[245222]: 2026-01-22 15:05:14.06765094 +0000 UTC m=+0.134537934 container start e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 15:05:14 compute-1 podman[245222]: 2026-01-22 15:05:14.071416133 +0000 UTC m=+0.138303127 container attach e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 15:05:14 compute-1 focused_thompson[245238]: 167 167
Jan 22 15:05:14 compute-1 systemd[1]: libpod-e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854.scope: Deactivated successfully.
Jan 22 15:05:14 compute-1 podman[245222]: 2026-01-22 15:05:14.073626853 +0000 UTC m=+0.140513847 container died e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 15:05:14 compute-1 systemd[1]: var-lib-containers-storage-overlay-5c97f77ababb134115cbeb6a3f514d75364e9ba9fbcd9ee372bd97ef75dc4cb1-merged.mount: Deactivated successfully.
Jan 22 15:05:14 compute-1 podman[245222]: 2026-01-22 15:05:14.109746603 +0000 UTC m=+0.176633597 container remove e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 15:05:14 compute-1 systemd[1]: libpod-conmon-e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854.scope: Deactivated successfully.
Jan 22 15:05:14 compute-1 podman[245263]: 2026-01-22 15:05:14.259993453 +0000 UTC m=+0.040083409 container create 872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 15:05:14 compute-1 systemd[1]: Started libpod-conmon-872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3.scope.
Jan 22 15:05:14 compute-1 systemd[1]: Started libcrun container.
Jan 22 15:05:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6434bf6df70098082a5675cce29c6f10e4ef20cc80a36989e40fe3dd0d0753/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 15:05:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6434bf6df70098082a5675cce29c6f10e4ef20cc80a36989e40fe3dd0d0753/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 15:05:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6434bf6df70098082a5675cce29c6f10e4ef20cc80a36989e40fe3dd0d0753/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 15:05:14 compute-1 podman[245263]: 2026-01-22 15:05:14.242331474 +0000 UTC m=+0.022421490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 15:05:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6434bf6df70098082a5675cce29c6f10e4ef20cc80a36989e40fe3dd0d0753/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 15:05:14 compute-1 podman[245263]: 2026-01-22 15:05:14.354480419 +0000 UTC m=+0.134570415 container init 872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 15:05:14 compute-1 podman[245263]: 2026-01-22 15:05:14.360511302 +0000 UTC m=+0.140601268 container start 872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 15:05:14 compute-1 podman[245263]: 2026-01-22 15:05:14.364064398 +0000 UTC m=+0.144154374 container attach 872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 15:05:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:15.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:15 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:15 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:15 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:15.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:16 compute-1 podman[245567]: 2026-01-22 15:05:16.100999649 +0000 UTC m=+0.084719451 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]: [
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:     {
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:         "available": false,
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:         "ceph_device": false,
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:         "lsm_data": {},
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:         "lvs": [],
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:         "path": "/dev/sr0",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:         "rejected_reasons": [
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "Has a FileSystem",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "Insufficient space (<5GB)"
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:         ],
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:         "sys_api": {
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "actuators": null,
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "device_nodes": "sr0",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "devname": "sr0",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "human_readable_size": "482.00 KB",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "id_bus": "ata",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "model": "QEMU DVD-ROM",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "nr_requests": "2",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "parent": "/dev/sr0",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "partitions": {},
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "path": "/dev/sr0",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "removable": "1",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "rev": "2.5+",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "ro": "0",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "rotational": "1",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "sas_address": "",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "sas_device_handle": "",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "scheduler_mode": "mq-deadline",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "sectors": 0,
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "sectorsize": "2048",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "size": 493568.0,
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "support_discard": "2048",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "type": "disk",
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:             "vendor": "QEMU"
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:         }
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]:     }
Jan 22 15:05:16 compute-1 thirsty_sinoussi[245280]: ]
Jan 22 15:05:16 compute-1 systemd[1]: libpod-872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3.scope: Deactivated successfully.
Jan 22 15:05:16 compute-1 systemd[1]: libpod-872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3.scope: Consumed 1.118s CPU time.
Jan 22 15:05:16 compute-1 podman[245263]: 2026-01-22 15:05:16.175947964 +0000 UTC m=+1.956037920 container died 872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 15:05:16 compute-1 systemd[1]: var-lib-containers-storage-overlay-ca6434bf6df70098082a5675cce29c6f10e4ef20cc80a36989e40fe3dd0d0753-merged.mount: Deactivated successfully.
Jan 22 15:05:16 compute-1 podman[245263]: 2026-01-22 15:05:16.225711016 +0000 UTC m=+2.005800972 container remove 872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 15:05:16 compute-1 systemd[1]: libpod-conmon-872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3.scope: Deactivated successfully.
Jan 22 15:05:16 compute-1 sudo[245157]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:16 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:16 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 5303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:16 compute-1 ceph-mon[81715]: pgmap v2938: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:16 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:17.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:17 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:05:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:05:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:05:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:05:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:05:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:17.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:18 compute-1 ceph-mon[81715]: pgmap v2939: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:18 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:19.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:19.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:20 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:05:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:21.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:05:21 compute-1 ceph-mon[81715]: pgmap v2940: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:21 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:21 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 5308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:21 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:05:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:21.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:05:22 compute-1 ceph-mon[81715]: pgmap v2941: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:22 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:23.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:23 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:05:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:23.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:05:23 compute-1 sudo[246459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:23 compute-1 sudo[246459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:23 compute-1 sudo[246459]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:24 compute-1 sudo[246484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:05:24 compute-1 sudo[246484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:24 compute-1 sudo[246484]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:24 compute-1 ceph-mon[81715]: pgmap v2942: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:24 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:25.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:25 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:25 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 5313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:25.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:26 compute-1 ceph-mon[81715]: pgmap v2943: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:26 compute-1 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:27.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:27 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:27.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:28 compute-1 ceph-mon[81715]: pgmap v2944: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:28 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:29.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:29.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:29 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:30 compute-1 ceph-mon[81715]: pgmap v2945: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:30 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:30 compute-1 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 5318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:31 compute-1 podman[246509]: 2026-01-22 15:05:31.109998106 +0000 UTC m=+0.085981316 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 22 15:05:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:31.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:31.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:05:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 16K writes, 89K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1887 writes, 9766 keys, 1887 commit groups, 1.0 writes per commit group, ingest: 16.87 MB, 0.03 MB/s
                                           Interval WAL: 1887 writes, 1887 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     71.2      1.36              0.32        58    0.023       0      0       0.0       0.0
                                             L6      1/0   11.43 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.6    143.2    123.7      4.38              1.59        57    0.077    549K    30K       0.0       0.0
                                            Sum      1/0   11.43 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.6    109.3    111.3      5.74              1.91       115    0.050    549K    30K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5    138.2    140.4      0.59              0.29        14    0.042     95K   3607       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    143.2    123.7      4.38              1.59        57    0.077    549K    30K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     71.3      1.36              0.32        57    0.024       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.094, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.62 GB write, 0.12 MB/s write, 0.61 GB read, 0.12 MB/s read, 5.7 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 67.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000458 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3559,64.19 MB,21.1146%) FilterBlock(115,1.48 MB,0.487152%) IndexBlock(115,1.93 MB,0.633526%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 15:05:31 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:32 compute-1 ceph-mon[81715]: pgmap v2946: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:32 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:33.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:05:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:33.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:05:34 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:35.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:35 compute-1 ceph-mon[81715]: pgmap v2947: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:35 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:35 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:35 compute-1 ceph-mon[81715]: Health check update: 102 slow ops, oldest one blocked for 5323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:35.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:37 compute-1 ceph-mon[81715]: pgmap v2948: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:37 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:37.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:37.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:38 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:38 compute-1 ceph-mon[81715]: pgmap v2949: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:05:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:39.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:05:39 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:39 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:39.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:40 compute-1 ceph-mon[81715]: pgmap v2950: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:40 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:40 compute-1 ceph-mon[81715]: Health check update: 102 slow ops, oldest one blocked for 5328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:41.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:41.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:42 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:43.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:43 compute-1 ceph-mon[81715]: pgmap v2951: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:43.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:44 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:44 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:44 compute-1 ceph-mon[81715]: pgmap v2952: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:44 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:45.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:45 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:45 compute-1 ceph-mon[81715]: Health check update: 102 slow ops, oldest one blocked for 5333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:05:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:45.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:05:46 compute-1 ceph-mon[81715]: pgmap v2953: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:46 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:47 compute-1 podman[246528]: 2026-01-22 15:05:47.104474322 +0000 UTC m=+0.095705590 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 15:05:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:47.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:05:47.504 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:05:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:05:47.504 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:05:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:05:47.504 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:05:47 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:47.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:48 compute-1 ceph-mon[81715]: pgmap v2954: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:48 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:49.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:49 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:49.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:50 compute-1 ceph-mon[81715]: pgmap v2955: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:50 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:50 compute-1 ceph-mon[81715]: Health check update: 102 slow ops, oldest one blocked for 5338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:05:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:51.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:05:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:05:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:51.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:05:51 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:53 compute-1 ceph-mon[81715]: pgmap v2956: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:53 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:53.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:53.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:54 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:54 compute-1 ceph-mon[81715]: pgmap v2957: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:55 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:55.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:55.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:56 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:56 compute-1 ceph-mon[81715]: Health check update: 102 slow ops, oldest one blocked for 5343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:56 compute-1 ceph-mon[81715]: pgmap v2958: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:57 compute-1 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:57.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:05:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:57.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:05:58 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:05:58 compute-1 ceph-mon[81715]: pgmap v2959: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:59 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:05:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:05:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:59.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:05:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:05:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:59.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:00 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:00 compute-1 ceph-mon[81715]: pgmap v2960: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:00 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:00 compute-1 ceph-mon[81715]: Health check update: 102 slow ops, oldest one blocked for 5348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:01.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:01 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:01.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:02 compute-1 podman[246556]: 2026-01-22 15:06:02.054310242 +0000 UTC m=+0.048723863 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:06:02 compute-1 ceph-mon[81715]: pgmap v2961: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:03.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:03 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:03.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:04 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:04 compute-1 ceph-mon[81715]: pgmap v2962: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:05.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:05 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:05 compute-1 ceph-mon[81715]: Health check update: 84 slow ops, oldest one blocked for 5353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:05.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:06 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:06 compute-1 ceph-mon[81715]: pgmap v2963: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:07.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:07 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:07.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:08 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:08 compute-1 ceph-mon[81715]: pgmap v2964: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:09.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:09 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:09.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:10 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:10 compute-1 ceph-mon[81715]: pgmap v2965: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:10 compute-1 ceph-mon[81715]: Health check update: 84 slow ops, oldest one blocked for 5358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:11.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:11 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:11 compute-1 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 15:06:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:11.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:12 compute-1 ceph-mon[81715]: pgmap v2966: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:12 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:13.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:13 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:13.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:14 compute-1 ceph-mon[81715]: pgmap v2967: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:14 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:15.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:15.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:15 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:15 compute-1 ceph-mon[81715]: Health check update: 84 slow ops, oldest one blocked for 5363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:16 compute-1 ceph-mon[81715]: pgmap v2968: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:16 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:17.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:17.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:17 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:18 compute-1 podman[246575]: 2026-01-22 15:06:18.095218938 +0000 UTC m=+0.085530833 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:06:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:06:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2975360653' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:06:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:06:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2975360653' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:06:18 compute-1 ceph-mon[81715]: pgmap v2969: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:18 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2975360653' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:06:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2975360653' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:06:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:19.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:19.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:20 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #184. Immutable memtables: 0.
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.187630) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 184
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380187707, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 1473, "num_deletes": 250, "total_data_size": 2768771, "memory_usage": 2825176, "flush_reason": "Manual Compaction"}
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #185: started
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380197139, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 185, "file_size": 1191330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88630, "largest_seqno": 90098, "table_properties": {"data_size": 1186443, "index_size": 2090, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14800, "raw_average_key_size": 21, "raw_value_size": 1174975, "raw_average_value_size": 1727, "num_data_blocks": 89, "num_entries": 680, "num_filter_entries": 680, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094290, "oldest_key_time": 1769094290, "file_creation_time": 1769094380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 9529 microseconds, and 3767 cpu microseconds.
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.197169) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #185: 1191330 bytes OK
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.197185) [db/memtable_list.cc:519] [default] Level-0 commit table #185 started
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.198455) [db/memtable_list.cc:722] [default] Level-0 commit table #185: memtable #1 done
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.198472) EVENT_LOG_v1 {"time_micros": 1769094380198467, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.198488) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 2761702, prev total WAL file size 2761702, number of live WAL files 2.
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000181.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.199422) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353037' seq:72057594037927935, type:22 .. '6D6772737461740032373538' seq:0, type:0; will stop at (end)
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [185(1163KB)], [183(11MB)]
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380199563, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [185], "files_L6": [183], "score": -1, "input_data_size": 13179121, "oldest_snapshot_seqno": -1}
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #186: 13925 keys, 9876851 bytes, temperature: kUnknown
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380298985, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 186, "file_size": 9876851, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9804646, "index_size": 36316, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34821, "raw_key_size": 383465, "raw_average_key_size": 27, "raw_value_size": 9571254, "raw_average_value_size": 687, "num_data_blocks": 1296, "num_entries": 13925, "num_filter_entries": 13925, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 186, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.299452) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 9876851 bytes
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.301014) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.3 rd, 99.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.4 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(19.4) write-amplify(8.3) OK, records in: 14403, records dropped: 478 output_compression: NoCompression
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.301047) EVENT_LOG_v1 {"time_micros": 1769094380301032, "job": 118, "event": "compaction_finished", "compaction_time_micros": 99595, "compaction_time_cpu_micros": 53563, "output_level": 6, "num_output_files": 1, "total_output_size": 9876851, "num_input_records": 14403, "num_output_records": 13925, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380301955, "job": 118, "event": "table_file_deletion", "file_number": 185}
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000183.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380306437, "job": 118, "event": "table_file_deletion", "file_number": 183}
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.199272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.306572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.306578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.306580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.306582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:06:20 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.306584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:06:21 compute-1 ceph-mon[81715]: pgmap v2970: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:21 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:21 compute-1 ceph-mon[81715]: Health check update: 84 slow ops, oldest one blocked for 5368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:21.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:21.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:22 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:22 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:22 compute-1 ceph-mon[81715]: pgmap v2971: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:22 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:23.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:23.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:24 compute-1 sudo[246601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:06:24 compute-1 sudo[246601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:24 compute-1 sudo[246601]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:24 compute-1 sudo[246626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:06:24 compute-1 sudo[246626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:24 compute-1 sudo[246626]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:24 compute-1 sudo[246651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:06:24 compute-1 sudo[246651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:24 compute-1 sudo[246651]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:24 compute-1 sudo[246676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 15:06:24 compute-1 sudo[246676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:24 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:24 compute-1 ceph-mon[81715]: pgmap v2972: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:24 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:24 compute-1 podman[246773]: 2026-01-22 15:06:24.919276123 +0000 UTC m=+0.065474669 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:06:25 compute-1 podman[246773]: 2026-01-22 15:06:25.012967317 +0000 UTC m=+0.159165773 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 15:06:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:25.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:25 compute-1 sudo[246676]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:25 compute-1 sudo[246901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:06:25 compute-1 sudo[246901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:25 compute-1 sudo[246901]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:25 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:25 compute-1 ceph-mon[81715]: Health check update: 84 slow ops, oldest one blocked for 5373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:06:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:06:25 compute-1 sudo[246926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:06:25 compute-1 sudo[246926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:25 compute-1 sudo[246926]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:25 compute-1 sudo[246951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:06:25 compute-1 sudo[246951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:25 compute-1 sudo[246951]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:25 compute-1 sudo[246976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:06:25 compute-1 sudo[246976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:25.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:26 compute-1 sudo[246976]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:26 compute-1 ceph-mon[81715]: pgmap v2973: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:26 compute-1 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:06:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:06:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:06:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:06:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:06:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:06:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:27.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:27.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:28 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:06:28.234 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:06:28 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:06:28.236 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:06:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:28 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:28 compute-1 ceph-mon[81715]: pgmap v2974: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:29.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:29 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:29.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:30 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:30 compute-1 ceph-mon[81715]: pgmap v2975: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:30 compute-1 ceph-mon[81715]: Health check update: 84 slow ops, oldest one blocked for 5378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:31.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:31 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:31.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:32 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:06:32.239 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:06:32 compute-1 ceph-mon[81715]: pgmap v2976: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:32 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:33 compute-1 podman[247032]: 2026-01-22 15:06:33.101736929 +0000 UTC m=+0.075465620 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 22 15:06:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:33.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:33 compute-1 sudo[247051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:06:33 compute-1 sudo[247051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:33 compute-1 sudo[247051]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:33 compute-1 sudo[247076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:06:33 compute-1 sudo[247076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:33 compute-1 sudo[247076]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:33 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:06:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:06:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:33.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:34 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:34 compute-1 ceph-mon[81715]: pgmap v2977: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:35.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:35.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:35 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:35 compute-1 ceph-mon[81715]: Health check update: 103 slow ops, oldest one blocked for 5383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:37 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:37 compute-1 ceph-mon[81715]: pgmap v2978: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:37.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:37.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:38 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:38 compute-1 ceph-mon[81715]: pgmap v2979: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:39 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:39.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:39.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:40 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:40 compute-1 ceph-mon[81715]: pgmap v2980: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:41 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:41 compute-1 ceph-mon[81715]: Health check update: 103 slow ops, oldest one blocked for 5388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:41 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:41.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:41.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:42 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:42 compute-1 ceph-mon[81715]: pgmap v2981: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:43 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:43.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:43.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:44 compute-1 ceph-mon[81715]: pgmap v2982: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:44 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:45.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:45 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:45 compute-1 ceph-mon[81715]: Health check update: 103 slow ops, oldest one blocked for 5393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:45.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:46 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:46 compute-1 ceph-mon[81715]: pgmap v2983: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:47.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:06:47.505 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:06:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:06:47.506 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:06:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:06:47.506 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:06:47 compute-1 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:47.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:48 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:48 compute-1 ceph-mon[81715]: pgmap v2984: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:49 compute-1 podman[247101]: 2026-01-22 15:06:49.09713296 +0000 UTC m=+0.086924022 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 15:06:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:49.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:49.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:49 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:50 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:50 compute-1 ceph-mon[81715]: pgmap v2985: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:50 compute-1 ceph-mon[81715]: Health check update: 103 slow ops, oldest one blocked for 5397 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:51.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:51.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:52 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:53 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:53 compute-1 ceph-mon[81715]: pgmap v2986: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:53.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:53.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:54 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:54 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:54 compute-1 ceph-mon[81715]: pgmap v2987: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:55.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:55 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:55 compute-1 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 5402 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:55.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:56 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:56 compute-1 ceph-mon[81715]: pgmap v2988: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:57.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:57 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:57.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:58 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:58 compute-1 ceph-mon[81715]: pgmap v2989: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:59.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:59 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:06:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:59.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:00 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:00 compute-1 ceph-mon[81715]: pgmap v2990: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:00 compute-1 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 5407 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:01.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:01 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:01.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:02 compute-1 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:07:02 compute-1 ceph-mon[81715]: pgmap v2991: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 22 15:07:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:07:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:03.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:07:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:03 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:03.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:04 compute-1 systemd[1]: Starting dnf makecache...
Jan 22 15:07:04 compute-1 podman[247129]: 2026-01-22 15:07:04.078499885 +0000 UTC m=+0.065431227 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:07:04 compute-1 dnf[247130]: Metadata cache refreshed recently.
Jan 22 15:07:04 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 22 15:07:04 compute-1 systemd[1]: Finished dnf makecache.
Jan 22 15:07:04 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:04 compute-1 ceph-mon[81715]: pgmap v2992: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 22 15:07:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:05.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:05.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:06 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:06 compute-1 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 5412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:07 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:07 compute-1 ceph-mon[81715]: pgmap v2993: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 0 B/s wr, 159 op/s
Jan 22 15:07:07 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:07.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:07.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:08 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:08 compute-1 ceph-mon[81715]: pgmap v2994: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 0 B/s wr, 159 op/s
Jan 22 15:07:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:09 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:09.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:09.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:10 compute-1 ceph-mon[81715]: pgmap v2995: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 0 B/s wr, 159 op/s
Jan 22 15:07:10 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:11 compute-1 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 5417 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:11 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:11.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:11.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:12 compute-1 ceph-mon[81715]: pgmap v2996: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 0 B/s wr, 159 op/s
Jan 22 15:07:12 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:13 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:13.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.003000079s ======
Jan 22 15:07:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:13.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Jan 22 15:07:14 compute-1 ceph-mon[81715]: pgmap v2997: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 0 B/s wr, 146 op/s
Jan 22 15:07:14 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:15 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:15 compute-1 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 5422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:15.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:15.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:16 compute-1 ceph-mon[81715]: pgmap v2998: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 0 B/s wr, 146 op/s
Jan 22 15:07:16 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:17 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:17.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:17.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:18 compute-1 ceph-mon[81715]: pgmap v2999: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:18 compute-1 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:19.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:19 compute-1 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/394610157' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:07:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/394610157' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:07:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:19.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:20 compute-1 podman[247149]: 2026-01-22 15:07:20.08962091 +0000 UTC m=+0.078222428 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:07:20 compute-1 ceph-mon[81715]: pgmap v3000: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:20 compute-1 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:20 compute-1 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 5427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:21.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:21 compute-1 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:21.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:07:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:23.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:07:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:23 compute-1 ceph-mon[81715]: pgmap v3001: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:23 compute-1 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:23.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:24 compute-1 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:24 compute-1 ceph-mon[81715]: pgmap v3002: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:24 compute-1 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:25.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:25 compute-1 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:25 compute-1 ceph-mon[81715]: Health check update: 81 slow ops, oldest one blocked for 5432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 15:07:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:25.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 15:07:26 compute-1 ceph-mon[81715]: pgmap v3003: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:26 compute-1 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:27.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:27 compute-1 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:27.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:28 compute-1 ceph-mon[81715]: pgmap v3004: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:28 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:29.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:29.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:30 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:31.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:31 compute-1 ceph-mon[81715]: pgmap v3005: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:31 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:31 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:31 compute-1 ceph-mon[81715]: Health check update: 81 slow ops, oldest one blocked for 5437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:31.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:32 compute-1 ceph-mon[81715]: pgmap v3006: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:32 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:32 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e174 e174: 3 total, 3 up, 3 in
Jan 22 15:07:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:07:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:33.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:07:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:33 compute-1 sudo[247174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:07:33 compute-1 sudo[247174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:33 compute-1 sudo[247174]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:33 compute-1 sudo[247199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:07:33 compute-1 sudo[247199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:33 compute-1 sudo[247199]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:33 compute-1 sudo[247224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:07:33 compute-1 sudo[247224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:33 compute-1 sudo[247224]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:33 compute-1 sudo[247249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:07:33 compute-1 sudo[247249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:33 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:33 compute-1 ceph-mon[81715]: osdmap e174: 3 total, 3 up, 3 in
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #187. Immutable memtables: 0.
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.944897) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 187
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094453944926, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1271, "num_deletes": 306, "total_data_size": 2194411, "memory_usage": 2236688, "flush_reason": "Manual Compaction"}
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #188: started
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094453955821, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 188, "file_size": 1441669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90103, "largest_seqno": 91369, "table_properties": {"data_size": 1436443, "index_size": 2429, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14759, "raw_average_key_size": 21, "raw_value_size": 1424591, "raw_average_value_size": 2076, "num_data_blocks": 104, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 306, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094381, "oldest_key_time": 1769094381, "file_creation_time": 1769094453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 10976 microseconds, and 4734 cpu microseconds.
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.955866) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #188: 1441669 bytes OK
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.955890) [db/memtable_list.cc:519] [default] Level-0 commit table #188 started
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.957577) [db/memtable_list.cc:722] [default] Level-0 commit table #188: memtable #1 done
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.957593) EVENT_LOG_v1 {"time_micros": 1769094453957588, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.957611) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 2187944, prev total WAL file size 2187944, number of live WAL files 2.
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000184.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.958568) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [188(1407KB)], [186(9645KB)]
Jan 22 15:07:33 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094453958632, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [188], "files_L6": [186], "score": -1, "input_data_size": 11318520, "oldest_snapshot_seqno": -1}
Jan 22 15:07:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:33.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #189: 13980 keys, 9685693 bytes, temperature: kUnknown
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094454021751, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 189, "file_size": 9685693, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9613091, "index_size": 36521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35013, "raw_key_size": 385356, "raw_average_key_size": 27, "raw_value_size": 9378827, "raw_average_value_size": 670, "num_data_blocks": 1301, "num_entries": 13980, "num_filter_entries": 13980, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 189, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.021979) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 9685693 bytes
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.023049) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.2 rd, 153.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(14.6) write-amplify(6.7) OK, records in: 14611, records dropped: 631 output_compression: NoCompression
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.023065) EVENT_LOG_v1 {"time_micros": 1769094454023057, "job": 120, "event": "compaction_finished", "compaction_time_micros": 63177, "compaction_time_cpu_micros": 30736, "output_level": 6, "num_output_files": 1, "total_output_size": 9685693, "num_input_records": 14611, "num_output_records": 13980, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094454023348, "job": 120, "event": "table_file_deletion", "file_number": 188}
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000186.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094454025278, "job": 120, "event": "table_file_deletion", "file_number": 186}
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.958462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.025391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.025397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.025400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.025402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:07:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.025404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:07:34 compute-1 sudo[247249]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:35 compute-1 podman[247305]: 2026-01-22 15:07:35.098803843 +0000 UTC m=+0.086574244 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 15:07:35 compute-1 ceph-mon[81715]: pgmap v3008: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:35 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:07:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:07:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:07:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:35.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:07:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:35.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:36 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:07:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:07:36 compute-1 ceph-mon[81715]: pgmap v3009: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 22 15:07:36 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:36 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 5442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:07:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:07:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:07:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:07:37 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:37.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:37.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:38 compute-1 ceph-mon[81715]: pgmap v3010: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 22 15:07:38 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:39.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:39 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:39.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:40 compute-1 ceph-mon[81715]: pgmap v3011: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 22 15:07:40 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:40 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 5447 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:41.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:41 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:07:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:07:41 compute-1 sudo[247323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:07:41 compute-1 sudo[247323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:41 compute-1 sudo[247323]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:41.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:42 compute-1 sudo[247348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:07:42 compute-1 sudo[247348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:42 compute-1 sudo[247348]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:43 compute-1 ceph-mon[81715]: pgmap v3012: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 22 15:07:43 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:43.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:43.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:44 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:44 compute-1 ceph-mon[81715]: pgmap v3013: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 1.9 MiB/s wr, 17 op/s
Jan 22 15:07:44 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e175 e175: 3 total, 3 up, 3 in
Jan 22 15:07:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:07:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:45.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:07:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:45.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:46 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:46 compute-1 ceph-mon[81715]: osdmap e175: 3 total, 3 up, 3 in
Jan 22 15:07:46 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 5452 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:47 compute-1 ceph-mon[81715]: pgmap v3015: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 22 15:07:47 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:47.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:07:47.507 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:07:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:07:47.508 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:07:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:07:47.508 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:07:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:07:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:47.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:07:48 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:48 compute-1 ceph-mon[81715]: pgmap v3016: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 22 15:07:48 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e176 e176: 3 total, 3 up, 3 in
Jan 22 15:07:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:49.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:49 compute-1 ceph-mon[81715]: osdmap e176: 3 total, 3 up, 3 in
Jan 22 15:07:49 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:49.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:50 compute-1 ceph-mon[81715]: pgmap v3018: 305 pgs: 2 active+clean+laggy, 303 active+clean; 860 MiB data, 656 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.6 MiB/s wr, 32 op/s
Jan 22 15:07:50 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:50 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e177 e177: 3 total, 3 up, 3 in
Jan 22 15:07:51 compute-1 podman[247373]: 2026-01-22 15:07:51.113699129 +0000 UTC m=+0.087686463 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 15:07:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:51.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:51.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:52 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:52 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 5457 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:52 compute-1 ceph-mon[81715]: osdmap e177: 3 total, 3 up, 3 in
Jan 22 15:07:53 compute-1 ceph-mon[81715]: pgmap v3020: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 3.1 MiB/s wr, 66 op/s
Jan 22 15:07:53 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:53 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:53.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:54.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:54 compute-1 ceph-mon[81715]: pgmap v3021: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 2.6 MiB/s wr, 30 op/s
Jan 22 15:07:54 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:55.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:55 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e178 e178: 3 total, 3 up, 3 in
Jan 22 15:07:55 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:56.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:57 compute-1 ceph-mon[81715]: pgmap v3022: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 2.6 MiB/s wr, 41 op/s
Jan 22 15:07:57 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:57 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 5462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:57 compute-1 ceph-mon[81715]: osdmap e178: 3 total, 3 up, 3 in
Jan 22 15:07:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:57.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:57 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:07:57.625 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:07:57 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:07:57.625 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:07:57 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:07:57.626 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:07:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:58.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:58 compute-1 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:58 compute-1 ceph-mon[81715]: pgmap v3024: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.0 MiB/s wr, 32 op/s
Jan 22 15:07:58 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:07:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:07:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:07:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:59.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:07:59 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:08:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:00.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:08:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:01.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:01 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:01 compute-1 ceph-mon[81715]: pgmap v3025: 305 pgs: 2 active+clean+laggy, 303 active+clean; 855 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 718 B/s wr, 11 op/s
Jan 22 15:08:01 compute-1 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 5467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:02.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:03 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:03 compute-1 ceph-mon[81715]: pgmap v3026: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 22 15:08:03 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:03.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:04.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:04 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:05.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:05 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:05 compute-1 ceph-mon[81715]: pgmap v3027: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 22 15:08:05 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:06.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:06 compute-1 podman[247400]: 2026-01-22 15:08:06.05053796 +0000 UTC m=+0.043818346 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 22 15:08:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 e179: 3 total, 3 up, 3 in
Jan 22 15:08:07 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:07 compute-1 ceph-mon[81715]: pgmap v3028: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 22 15:08:07 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:07 compute-1 ceph-mon[81715]: osdmap e179: 3 total, 3 up, 3 in
Jan 22 15:08:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:08:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:07.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:08:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:08.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:08 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:08 compute-1 ceph-mon[81715]: pgmap v3030: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 22 15:08:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:09.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:09 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:09 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:10.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #190. Immutable memtables: 0.
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.621360) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 190
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490621430, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 841, "num_deletes": 329, "total_data_size": 1268789, "memory_usage": 1293632, "flush_reason": "Manual Compaction"}
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #191: started
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490628106, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 191, "file_size": 822633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91374, "largest_seqno": 92210, "table_properties": {"data_size": 818649, "index_size": 1507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11369, "raw_average_key_size": 21, "raw_value_size": 809707, "raw_average_value_size": 1502, "num_data_blocks": 65, "num_entries": 539, "num_filter_entries": 539, "num_deletions": 329, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094454, "oldest_key_time": 1769094454, "file_creation_time": 1769094490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 6802 microseconds, and 3107 cpu microseconds.
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.628158) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #191: 822633 bytes OK
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.628180) [db/memtable_list.cc:519] [default] Level-0 commit table #191 started
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.629459) [db/memtable_list.cc:722] [default] Level-0 commit table #191: memtable #1 done
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.629474) EVENT_LOG_v1 {"time_micros": 1769094490629470, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.629492) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 1263977, prev total WAL file size 1263977, number of live WAL files 2.
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000187.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.630009) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323734' seq:72057594037927935, type:22 .. '6C6F676D0034353331' seq:0, type:0; will stop at (end)
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [191(803KB)], [189(9458KB)]
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490630092, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [191], "files_L6": [189], "score": -1, "input_data_size": 10508326, "oldest_snapshot_seqno": -1}
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #192: 13846 keys, 10339873 bytes, temperature: kUnknown
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490696720, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 192, "file_size": 10339873, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10267025, "index_size": 37151, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34629, "raw_key_size": 383323, "raw_average_key_size": 27, "raw_value_size": 10033735, "raw_average_value_size": 724, "num_data_blocks": 1325, "num_entries": 13846, "num_filter_entries": 13846, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 192, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.696969) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 10339873 bytes
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.699046) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.6 rd, 155.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.2 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(25.3) write-amplify(12.6) OK, records in: 14519, records dropped: 673 output_compression: NoCompression
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.699071) EVENT_LOG_v1 {"time_micros": 1769094490699060, "job": 122, "event": "compaction_finished", "compaction_time_micros": 66680, "compaction_time_cpu_micros": 36484, "output_level": 6, "num_output_files": 1, "total_output_size": 10339873, "num_input_records": 14519, "num_output_records": 13846, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490699342, "job": 122, "event": "table_file_deletion", "file_number": 191}
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000189.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490701058, "job": 122, "event": "table_file_deletion", "file_number": 189}
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.629881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.701119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.701125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.701127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.701128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:08:10 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.701130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:08:11 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:11 compute-1 ceph-mon[81715]: pgmap v3031: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Jan 22 15:08:11 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:08:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:11.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:08:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:12.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:12 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:13.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:13 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:13 compute-1 ceph-mon[81715]: pgmap v3032: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:13 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:14.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:15 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:15 compute-1 ceph-mon[81715]: pgmap v3033: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:15.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 15:08:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 15:08:16 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:16 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:17.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:08:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:18.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:08:18 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:18 compute-1 ceph-mon[81715]: pgmap v3034: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:18 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:08:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3992916383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:08:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:08:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3992916383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:08:19 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:19 compute-1 ceph-mon[81715]: pgmap v3035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3992916383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:08:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3992916383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:08:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:19.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:20.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:20 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:20 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:20 compute-1 ceph-mon[81715]: pgmap v3036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:08:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:21.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:08:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:22.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:22 compute-1 podman[247420]: 2026-01-22 15:08:22.102419258 +0000 UTC m=+0.085103494 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:08:22 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:22 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:23.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:23 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:23 compute-1 ceph-mon[81715]: pgmap v3037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:23 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:08:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:24.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:08:25 compute-1 ceph-mon[81715]: pgmap v3038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:25 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:25.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:26.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:27 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:27 compute-1 ceph-mon[81715]: pgmap v3039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:27 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5492 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:27.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:28.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:29 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:29 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:29 compute-1 ceph-mon[81715]: pgmap v3040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:29 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:29.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:08:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:30.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:08:30 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:30 compute-1 ceph-mon[81715]: pgmap v3041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:30 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:31.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:32.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:32 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:32 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:33 compute-1 ceph-mon[81715]: pgmap v3042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:33 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:33.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:34.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:34 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:34 compute-1 ceph-mon[81715]: pgmap v3043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:34 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:35 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:35.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:08:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:36.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:08:36 compute-1 ceph-mon[81715]: pgmap v3044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:36 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:36 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:37 compute-1 podman[247447]: 2026-01-22 15:08:37.076707003 +0000 UTC m=+0.070117131 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 15:08:37 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:37.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:38.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:38 compute-1 ceph-mon[81715]: pgmap v3045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:38 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:39.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:40.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:40 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:41.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:42 compute-1 ceph-mon[81715]: pgmap v3046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:42 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:42 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:42.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:42 compute-1 sudo[247466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:42 compute-1 sudo[247466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:42 compute-1 sudo[247466]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:42 compute-1 sudo[247491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:08:42 compute-1 sudo[247491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:42 compute-1 sudo[247491]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:42 compute-1 sudo[247516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:42 compute-1 sudo[247516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:42 compute-1 sudo[247516]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:42 compute-1 sudo[247541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 15:08:42 compute-1 sudo[247541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:42 compute-1 sudo[247541]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:43.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:43 compute-1 ceph-mon[81715]: pgmap v3047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:43 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:43 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:44.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:44 compute-1 sudo[247587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:44 compute-1 sudo[247587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:44 compute-1 sudo[247587]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:44 compute-1 sudo[247612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:08:44 compute-1 sudo[247612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:44 compute-1 sudo[247612]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:44 compute-1 sudo[247637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:44 compute-1 sudo[247637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:44 compute-1 sudo[247637]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:44 compute-1 sudo[247662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:08:44 compute-1 sudo[247662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:44 compute-1 sudo[247662]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:44 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:08:44 compute-1 ceph-mon[81715]: pgmap v3048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:44 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:08:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:45.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:45 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:08:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:08:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:08:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:08:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:08:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:08:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:08:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:08:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:46.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:47 compute-1 ceph-mon[81715]: pgmap v3049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:47 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:08:47.508 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:08:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:08:47.509 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:08:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:08:47.509 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:08:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:08:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:47.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:08:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:48.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:48 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:48 compute-1 ceph-mon[81715]: pgmap v3050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:48 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:49 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:08:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:49.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:08:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:50.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:51 compute-1 ceph-mon[81715]: pgmap v3051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:51 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:51 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:51.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:08:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:52.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:08:52 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:52 compute-1 ceph-mon[81715]: pgmap v3052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:52 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:53 compute-1 podman[247720]: 2026-01-22 15:08:53.121399877 +0000 UTC m=+0.103675101 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Jan 22 15:08:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:53 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:53.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:54.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:54 compute-1 ceph-mon[81715]: pgmap v3053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:54 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:55.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:56.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:56 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:56 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:57 compute-1 ceph-mon[81715]: pgmap v3054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:57 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:57 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:08:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:57 compute-1 sudo[247746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:57.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:57 compute-1 sudo[247746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:57 compute-1 sudo[247746]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:57 compute-1 sudo[247771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:08:57 compute-1 sudo[247771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:57 compute-1 sudo[247771]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:08:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:58.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:08:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:58 compute-1 ceph-mon[81715]: pgmap v3055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:58 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:08:58 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:08:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:59.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:00.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:00 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:01 compute-1 ceph-mon[81715]: pgmap v3056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:01 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:01 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:01 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:01.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:02.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:02 compute-1 ceph-mon[81715]: pgmap v3057: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:02 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:03 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:03.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:04 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:09:04.104 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:09:04 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:09:04.105 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:09:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:04.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:04 compute-1 ceph-mon[81715]: pgmap v3058: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:04 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:05.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:06.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:06 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:06 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:07 compute-1 ceph-mon[81715]: pgmap v3059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:07 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:07 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:07.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:08 compute-1 podman[247796]: 2026-01-22 15:09:08.047455078 +0000 UTC m=+0.042611683 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:09:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:08.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:08 compute-1 ceph-mon[81715]: pgmap v3060: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:08 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:09:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:09.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:09:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:10.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:10 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:11 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:09:11.107 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:09:11 compute-1 ceph-mon[81715]: pgmap v3061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:11 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:11 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:11 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:11.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:12.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:12 compute-1 ceph-mon[81715]: pgmap v3062: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:12 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:13 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:13.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:14.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:14 compute-1 ceph-mon[81715]: pgmap v3063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:14 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:15 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:15 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:15.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:16.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:16 compute-1 ceph-mon[81715]: pgmap v3064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:16 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:09:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:17.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:09:17 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:18.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:18 compute-1 ceph-mon[81715]: pgmap v3065: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:18 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2298473774' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:09:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2298473774' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:09:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:19.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:19 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:20.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:20 compute-1 ceph-mon[81715]: pgmap v3066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:20 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:20 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:21.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:22 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:22.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:23 compute-1 ceph-mon[81715]: pgmap v3067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:23 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:23.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:24 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:24 compute-1 ceph-mon[81715]: pgmap v3068: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:24 compute-1 podman[247815]: 2026-01-22 15:09:24.133597234 +0000 UTC m=+0.111539222 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 15:09:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:24.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:25 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:25.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:26.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:27 compute-1 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:27 compute-1 ceph-mon[81715]: pgmap v3069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:27 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:27 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:27.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:28.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:28 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:28 compute-1 ceph-mon[81715]: pgmap v3070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:28 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:29 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:09:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:29.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:09:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:30.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:31 compute-1 ceph-mon[81715]: pgmap v3071: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:31 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:31 compute-1 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:31.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:32 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:32.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:33 compute-1 ceph-mon[81715]: pgmap v3072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:33 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:33.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:34 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:34 compute-1 ceph-mon[81715]: pgmap v3073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:09:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:34.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:09:35 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:35.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:36.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:36 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:36 compute-1 ceph-mon[81715]: pgmap v3074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:36 compute-1 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 5563 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:37 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:37 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:37.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:38.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:38 compute-1 ceph-mon[81715]: pgmap v3075: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:38 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:39 compute-1 podman[247842]: 2026-01-22 15:09:39.056452661 +0000 UTC m=+0.052325985 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 22 15:09:39 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:39.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:40.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:40 compute-1 ceph-mon[81715]: pgmap v3076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:40 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:41 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:41 compute-1 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 5568 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:41.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:42.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:42 compute-1 ceph-mon[81715]: pgmap v3077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:42 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:43 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:43.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:44.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:44 compute-1 ceph-mon[81715]: pgmap v3078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:44 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:45 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:45 compute-1 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 5573 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:09:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:45.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:09:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:09:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:46.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:09:46 compute-1 ceph-mon[81715]: pgmap v3079: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:46 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:09:47.510 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:09:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:09:47.510 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:09:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:09:47.510 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:09:47 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:47.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:48.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:48 compute-1 ceph-mon[81715]: pgmap v3080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:48 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:49 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:49.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:50.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:50 compute-1 ceph-mon[81715]: pgmap v3081: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:50 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:50 compute-1 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 5578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:51 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:51.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:52.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:52 compute-1 ceph-mon[81715]: pgmap v3082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:52 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:53.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:54 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:54.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:55 compute-1 ceph-mon[81715]: pgmap v3083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:55 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:55 compute-1 podman[247862]: 2026-01-22 15:09:55.171409349 +0000 UTC m=+0.161140601 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 15:09:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:09:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:55.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:09:56 compute-1 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:56 compute-1 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 5583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:56.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:57 compute-1 ceph-mon[81715]: pgmap v3084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:57 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:09:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:57.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:58 compute-1 sudo[247888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:09:58 compute-1 sudo[247888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:58 compute-1 sudo[247888]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:58.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:58 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:09:58 compute-1 ceph-mon[81715]: pgmap v3085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:58 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:09:58 compute-1 sudo[247913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:09:58 compute-1 sudo[247913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:58 compute-1 sudo[247913]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:58 compute-1 sudo[247938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:09:58 compute-1 sudo[247938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:58 compute-1 sudo[247938]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:58 compute-1 sudo[247963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:09:58 compute-1 sudo[247963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:58 compute-1 sudo[247963]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:59 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:09:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:09:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:09:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 15:09:59 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 15:09:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:09:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:59.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:00.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:00 compute-1 ceph-mon[81715]: pgmap v3086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:00 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:00 compute-1 ceph-mon[81715]: Health detail: HEALTH_WARN 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops
Jan 22 15:10:00 compute-1 ceph-mon[81715]: [WRN] SLOW_OPS: 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops
Jan 22 15:10:00 compute-1 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:10:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:01.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:10:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:02.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:10:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:10:03 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:03 compute-1 ceph-mon[81715]: pgmap v3087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:03 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:03 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:10:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:10:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:10:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:10:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:10:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:10:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:03.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:04.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:04 compute-1 ceph-mon[81715]: pgmap v3088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:04 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:05 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:10:05.009 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:10:05 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:10:05.010 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:10:05 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:10:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:05.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:10:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:06.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:06 compute-1 ceph-mon[81715]: pgmap v3089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:06 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:06 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:07 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:07.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:08.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:08 compute-1 ceph-mon[81715]: pgmap v3090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:08 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:08 compute-1 sudo[248018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:10:08 compute-1 sudo[248018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:10:09 compute-1 sudo[248018]: pam_unix(sudo:session): session closed for user root
Jan 22 15:10:09 compute-1 sudo[248043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:10:09 compute-1 sudo[248043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:10:09 compute-1 sudo[248043]: pam_unix(sudo:session): session closed for user root
Jan 22 15:10:09 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:09 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:10:09 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:10:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:09.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:10 compute-1 podman[248068]: 2026-01-22 15:10:10.135172831 +0000 UTC m=+0.116105265 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 15:10:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:10.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:10 compute-1 ceph-mon[81715]: pgmap v3091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:10 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:10 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:11 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:10:11.012 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:10:11 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:11.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:10:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:12.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:10:12 compute-1 ceph-mon[81715]: pgmap v3092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:12 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:13.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:14.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:14 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:15 compute-1 ceph-mon[81715]: pgmap v3093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:15 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:15 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:15.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:16.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:16 compute-1 ceph-mon[81715]: pgmap v3094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:16 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:16 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:17.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:18.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:18 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:19 compute-1 ceph-mon[81715]: pgmap v3095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:19 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:19 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2399157288' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:10:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2399157288' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:10:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:19.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:10:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:20.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:10:20 compute-1 ceph-mon[81715]: pgmap v3096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:20 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:21 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:21.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:10:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:22.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:10:22 compute-1 ceph-mon[81715]: pgmap v3097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:22 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:22 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:23 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:23.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:24.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:25 compute-1 ceph-mon[81715]: pgmap v3098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:25 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:25.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:26 compute-1 podman[248089]: 2026-01-22 15:10:26.115582902 +0000 UTC m=+0.094597118 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 15:10:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:26.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:26 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:26 compute-1 ceph-mon[81715]: pgmap v3099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:26 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 15:10:27 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #193. Immutable memtables: 0.
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.382934) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 193
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627382971, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2234, "num_deletes": 487, "total_data_size": 4150784, "memory_usage": 4221232, "flush_reason": "Manual Compaction"}
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #194: started
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627398018, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 194, "file_size": 2692767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92215, "largest_seqno": 94444, "table_properties": {"data_size": 2684212, "index_size": 4536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 26787, "raw_average_key_size": 22, "raw_value_size": 2663684, "raw_average_value_size": 2282, "num_data_blocks": 194, "num_entries": 1167, "num_filter_entries": 1167, "num_deletions": 487, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094490, "oldest_key_time": 1769094490, "file_creation_time": 1769094627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 15124 microseconds, and 6112 cpu microseconds.
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.398058) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #194: 2692767 bytes OK
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.398073) [db/memtable_list.cc:519] [default] Level-0 commit table #194 started
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.399180) [db/memtable_list.cc:722] [default] Level-0 commit table #194: memtable #1 done
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.399191) EVENT_LOG_v1 {"time_micros": 1769094627399188, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.399206) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 4139543, prev total WAL file size 4139543, number of live WAL files 2.
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000190.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.400102) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [194(2629KB)], [192(10097KB)]
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627400188, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [194], "files_L6": [192], "score": -1, "input_data_size": 13032640, "oldest_snapshot_seqno": -1}
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #195: 14022 keys, 11311499 bytes, temperature: kUnknown
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627472487, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 195, "file_size": 11311499, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11236308, "index_size": 39046, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35077, "raw_key_size": 386744, "raw_average_key_size": 27, "raw_value_size": 10998669, "raw_average_value_size": 784, "num_data_blocks": 1405, "num_entries": 14022, "num_filter_entries": 14022, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 195, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.472916) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 11311499 bytes
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.474288) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.0 rd, 156.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(9.0) write-amplify(4.2) OK, records in: 15013, records dropped: 991 output_compression: NoCompression
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.474330) EVENT_LOG_v1 {"time_micros": 1769094627474304, "job": 124, "event": "compaction_finished", "compaction_time_micros": 72412, "compaction_time_cpu_micros": 32175, "output_level": 6, "num_output_files": 1, "total_output_size": 11311499, "num_input_records": 15013, "num_output_records": 14022, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627475428, "job": 124, "event": "table_file_deletion", "file_number": 194}
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000192.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627478830, "job": 124, "event": "table_file_deletion", "file_number": 192}
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.400012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.478895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.478902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.478904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.478906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:10:27 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.478908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:10:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:27.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:28.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:28 compute-1 ceph-mon[81715]: pgmap v3100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:28 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:29 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:10:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:29.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:10:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:10:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:30.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:10:30 compute-1 ceph-mon[81715]: pgmap v3101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:30 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:31 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:31 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:10:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:31.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:10:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:10:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:32.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:10:32 compute-1 ceph-mon[81715]: pgmap v3102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:32 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:33 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:34.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:10:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:34.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:10:35 compute-1 ceph-mon[81715]: pgmap v3103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:35 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:36.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:36 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:36 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:36.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:37 compute-1 ceph-mon[81715]: pgmap v3104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:37 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:38.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:38.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:38 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:38 compute-1 ceph-mon[81715]: pgmap v3105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:38 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:39 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:40.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:40.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:40 compute-1 ceph-mon[81715]: pgmap v3106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:40 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:41 compute-1 podman[248116]: 2026-01-22 15:10:41.107528441 +0000 UTC m=+0.086725416 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Jan 22 15:10:41 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:41 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:42.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:42.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:42 compute-1 ceph-mon[81715]: pgmap v3107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:42 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:44.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:44.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:44 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:46.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:46.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:46 compute-1 ceph-mon[81715]: pgmap v3108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:46 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:46 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:10:47.511 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:10:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:10:47.511 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:10:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:10:47.511 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:10:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:48.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:48 compute-1 ceph-mon[81715]: pgmap v3109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:48 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:48 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:48 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:48.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:49 compute-1 ceph-mon[81715]: pgmap v3110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:49 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:50.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:50 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:50 compute-1 ceph-mon[81715]: pgmap v3111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:50.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:51 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:52.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:52 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:52 compute-1 ceph-mon[81715]: pgmap v3112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:52 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:52.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:53 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:53 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:54.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:54 compute-1 ceph-mon[81715]: pgmap v3113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:54 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:10:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:54.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:10:55 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:56.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:56 compute-1 ceph-mon[81715]: pgmap v3114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:56 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:56.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:57 compute-1 podman[248137]: 2026-01-22 15:10:57.119309234 +0000 UTC m=+0.105091679 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 15:10:57 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:57 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:58.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:58 compute-1 ceph-mon[81715]: pgmap v3115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:58 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:10:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:58.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:59 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:00.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:00.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:00 compute-1 ceph-mon[81715]: pgmap v3116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:00 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:01 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:02.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:02.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:03 compute-1 ceph-mon[81715]: pgmap v3117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:04.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:04.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:04 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:04 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:04 compute-1 ceph-mon[81715]: pgmap v3118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:04 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:05 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:11:05.832 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:11:05 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:11:05.833 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:11:05 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:11:05.833 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:11:05 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:06.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:06.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:07 compute-1 ceph-mon[81715]: pgmap v3119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:07 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:07 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:08.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:08.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:08 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:08 compute-1 ceph-mon[81715]: pgmap v3120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:08 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:09 compute-1 sudo[248162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:11:09 compute-1 sudo[248162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:09 compute-1 sudo[248162]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:09 compute-1 sudo[248187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:11:09 compute-1 sudo[248187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:09 compute-1 sudo[248187]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:09 compute-1 sudo[248212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:11:09 compute-1 sudo[248212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:09 compute-1 sudo[248212]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:09 compute-1 sudo[248237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:11:09 compute-1 sudo[248237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:09 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:10 compute-1 sudo[248237]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:10.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:10.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:10 compute-1 ceph-mon[81715]: pgmap v3121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:10 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:11:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:11:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:11:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:11:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:11:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:11:11 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:12 compute-1 podman[248291]: 2026-01-22 15:11:12.056475872 +0000 UTC m=+0.048987324 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:11:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:12.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:12.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:12 compute-1 ceph-mon[81715]: pgmap v3122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:12 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:12 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:14 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:14.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:14.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:15 compute-1 ceph-mon[81715]: pgmap v3123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:15 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:11:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:16.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:11:16 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:16 compute-1 ceph-mon[81715]: pgmap v3124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:16 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:11:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:16.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:11:16 compute-1 sudo[248311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:11:16 compute-1 sudo[248311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:16 compute-1 sudo[248311]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:16 compute-1 sudo[248336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:11:16 compute-1 sudo[248336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:16 compute-1 sudo[248336]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:11:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:11:17 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5668 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:17 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:18.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:18.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:11:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2143322617' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:11:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:11:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2143322617' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:11:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:18 compute-1 ceph-mon[81715]: pgmap v3125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:18 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2143322617' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:11:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2143322617' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:11:19 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:20.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:20.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:20 compute-1 ceph-mon[81715]: pgmap v3126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:20 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:21 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:22.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:22.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:23 compute-1 ceph-mon[81715]: pgmap v3127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:23 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:23 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5673 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:24 compute-1 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 15:11:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:24.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:24 compute-1 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 15:11:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:24.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:24 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:24 compute-1 ceph-mon[81715]: pgmap v3128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:24 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:26.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:26.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:27 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:27 compute-1 ceph-mon[81715]: pgmap v3129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:27 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:28.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:28 compute-1 podman[248362]: 2026-01-22 15:11:28.105047002 +0000 UTC m=+0.083988543 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 15:11:28 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:28 compute-1 ceph-mon[81715]: pgmap v3130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:28 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:28.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:29 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:30.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:11:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:30.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:11:30 compute-1 ceph-mon[81715]: pgmap v3131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:30 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:32.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:32.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:32 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:32 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5678 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:33 compute-1 ceph-mon[81715]: pgmap v3132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:33 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:33 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:34.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:34.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:34 compute-1 ceph-mon[81715]: pgmap v3133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:34 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:35 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:36.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:36.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:36 compute-1 ceph-mon[81715]: pgmap v3134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:36 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:37 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:37 compute-1 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5688 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:38.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:38.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:38 compute-1 ceph-mon[81715]: pgmap v3135: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:38 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:40.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:40 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:11:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:40.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:11:41 compute-1 ceph-mon[81715]: pgmap v3136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:41 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:41 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:42.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:42.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:43 compute-1 ceph-mon[81715]: pgmap v3137: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:43 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:43 compute-1 podman[248389]: 2026-01-22 15:11:43.082866312 +0000 UTC m=+0.060933675 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 15:11:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:44 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:44.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:11:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:44.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:11:45 compute-1 ceph-mon[81715]: pgmap v3138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:45 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:46 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:46.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:46.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:47 compute-1 ceph-mon[81715]: pgmap v3139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:47 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:47 compute-1 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 5698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:11:47.512 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:11:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:11:47.512 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:11:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:11:47.513 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:11:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:48.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:48 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:48 compute-1 ceph-mon[81715]: pgmap v3140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:48.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:49 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:49 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:11:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:50.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:11:50 compute-1 ceph-mon[81715]: pgmap v3141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:50 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:50.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:51 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:52.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:52.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:52 compute-1 ceph-mon[81715]: pgmap v3142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:52 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:53 compute-1 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 5703 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:53 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:54.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:54.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:54 compute-1 ceph-mon[81715]: pgmap v3143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:54 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:55 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:11:55.455 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:11:55 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:11:55.456 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:11:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:56.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:56.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:57 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:57 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/902276037' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:11:57 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/902276037' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:11:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:58.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:58 compute-1 ceph-mon[81715]: pgmap v3144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 10 op/s
Jan 22 15:11:58 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:58 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:58 compute-1 ceph-mon[81715]: pgmap v3145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 10 op/s
Jan 22 15:11:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:11:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:58.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:59 compute-1 podman[248409]: 2026-01-22 15:11:59.097966436 +0000 UTC m=+0.083507480 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:11:59 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:59 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:00.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:00.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:00 compute-1 ceph-mon[81715]: pgmap v3146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:12:00 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:02.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:02.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:02 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:02 compute-1 ceph-mon[81715]: pgmap v3147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:12:02 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:02 compute-1 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 5708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:03 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:12:03.458 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:12:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:04.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:12:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:04.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:12:05 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:06.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:12:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:06.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:12:06 compute-1 ceph-mon[81715]: pgmap v3148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:12:06 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:06 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:07 compute-1 ceph-mon[81715]: pgmap v3149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:12:07 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:07 compute-1 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 5713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:08.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:08.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:09 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:09 compute-1 ceph-mon[81715]: pgmap v3150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 3 op/s
Jan 22 15:12:09 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:10.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:10.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:10 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:11 compute-1 ceph-mon[81715]: pgmap v3151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 3 op/s
Jan 22 15:12:11 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:11 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:12.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:12.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:13 compute-1 ceph-mon[81715]: pgmap v3152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:13 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:13 compute-1 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 5718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:14 compute-1 podman[248433]: 2026-01-22 15:12:14.092212635 +0000 UTC m=+0.076643446 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 15:12:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:14.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:14.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:14 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:14 compute-1 ceph-mon[81715]: pgmap v3153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:14 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:16.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:16.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:16 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:16 compute-1 ceph-mon[81715]: pgmap v3154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:16 compute-1 sudo[248452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:12:16 compute-1 sudo[248452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:16 compute-1 sudo[248452]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:17 compute-1 sudo[248477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:12:17 compute-1 sudo[248477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:17 compute-1 sudo[248477]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:17 compute-1 sudo[248502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:12:17 compute-1 sudo[248502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:17 compute-1 sudo[248502]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:17 compute-1 sudo[248527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:12:17 compute-1 sudo[248527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:17 compute-1 sudo[248527]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:17 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:17 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:17 compute-1 ceph-mon[81715]: Health check update: 97 slow ops, oldest one blocked for 5728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:12:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:18.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:12:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:18.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:12:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/478835531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:12:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:12:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/478835531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:12:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:19 compute-1 ceph-mon[81715]: pgmap v3155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:19 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:12:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:12:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:12:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:12:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:12:19 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:12:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/478835531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:12:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/478835531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:12:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:20.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:20.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:20 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:22.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:22.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:23 compute-1 ceph-mon[81715]: pgmap v3156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:23 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:23 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:24.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:24.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:24 compute-1 ceph-mon[81715]: pgmap v3157: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:24 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:24 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:24 compute-1 ceph-mon[81715]: Health check update: 97 slow ops, oldest one blocked for 5733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:25 compute-1 ceph-mon[81715]: pgmap v3158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:25 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:25 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:26.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:12:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:26.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:12:27 compute-1 ceph-mon[81715]: pgmap v3159: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:27 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:28.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:28 compute-1 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:28 compute-1 ceph-mon[81715]: pgmap v3160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:28 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:28.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:29 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:30 compute-1 podman[248584]: 2026-01-22 15:12:30.098433311 +0000 UTC m=+0.094211167 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 15:12:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:30.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:12:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:30.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:12:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:32.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:32 compute-1 ceph-mon[81715]: pgmap v3161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:32 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:32 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:32.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:33 compute-1 ceph-mon[81715]: pgmap v3162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:33 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:33 compute-1 ceph-mon[81715]: Health check update: 97 slow ops, oldest one blocked for 5738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:33 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:12:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:12:33 compute-1 sudo[248610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:12:33 compute-1 sudo[248610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:33 compute-1 sudo[248610]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:33 compute-1 sudo[248635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:12:33 compute-1 sudo[248635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:33 compute-1 sudo[248635]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:34.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:34.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:34 compute-1 ceph-mon[81715]: pgmap v3163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:34 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:35 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:36.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:36.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:37 compute-1 ceph-mon[81715]: pgmap v3164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:37 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:37 compute-1 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 5748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:12:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:38.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:12:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:38.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:38 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:38 compute-1 ceph-mon[81715]: pgmap v3165: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:40.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:40 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:40 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:40.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:41 compute-1 ceph-mon[81715]: pgmap v3166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:41 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:41 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:42.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:42.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:42 compute-1 ceph-mon[81715]: pgmap v3167: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:42 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:43 compute-1 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 5753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:43 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:44.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:44.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:45 compute-1 ceph-mon[81715]: pgmap v3168: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:45 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:45 compute-1 podman[248660]: 2026-01-22 15:12:45.059433128 +0000 UTC m=+0.050160696 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 22 15:12:46 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:46.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:46.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:47 compute-1 ceph-mon[81715]: pgmap v3169: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:47 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:12:47.513 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:12:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:12:47.513 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:12:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:12:47.513 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:12:48 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:48.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:48.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:49 compute-1 ceph-mon[81715]: pgmap v3170: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:49 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:50 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:50.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:50.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:51 compute-1 ceph-mon[81715]: pgmap v3171: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:51 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:12:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:52.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:12:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:52.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:53 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:53 compute-1 ceph-mon[81715]: pgmap v3172: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:53 compute-1 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 5758 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:54.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:54.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:54 compute-1 ceph-mon[81715]: pgmap v3173: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:55 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:55 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:55 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:55 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:12:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:56.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:12:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:56.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:56 compute-1 ceph-mon[81715]: pgmap v3174: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:56 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:57 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:12:57.172 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:12:57 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:12:57.174 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:12:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:58.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:12:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:58.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:58 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:58 compute-1 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 5768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:59 compute-1 ceph-mon[81715]: pgmap v3175: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:59 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:12:59 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:00 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:13:00.176 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:13:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:00.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:00.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:00 compute-1 ceph-mon[81715]: pgmap v3176: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:00 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:01 compute-1 podman[248679]: 2026-01-22 15:13:01.104586217 +0000 UTC m=+0.095502272 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 22 15:13:01 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:02.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:13:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:02.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:13:02 compute-1 ceph-mon[81715]: pgmap v3177: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:02 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:02 compute-1 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:03 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:04.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:04.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:05 compute-1 ceph-mon[81715]: pgmap v3178: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:05 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:06 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:06.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:06.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:07 compute-1 ceph-mon[81715]: pgmap v3179: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:07 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:07 compute-1 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5778 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:08.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:08.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:08 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:08 compute-1 ceph-mon[81715]: pgmap v3180: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:09 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:09 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:10.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:10.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:10 compute-1 ceph-mon[81715]: pgmap v3181: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:10 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:12.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:12.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:12 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:13 compute-1 ceph-mon[81715]: pgmap v3182: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:13 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:13 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:14.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:13:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:14.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:13:14 compute-1 ceph-mon[81715]: pgmap v3183: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:14 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:15 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:16 compute-1 podman[248706]: 2026-01-22 15:13:16.096047662 +0000 UTC m=+0.088458893 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 22 15:13:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:16.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:13:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:16.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:13:17 compute-1 ceph-mon[81715]: pgmap v3184: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:17 compute-1 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:17 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:18 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:18.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:18.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:19 compute-1 ceph-mon[81715]: pgmap v3185: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:19 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1492018018' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:13:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1492018018' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:13:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:20.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:20.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:20 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:22 compute-1 ceph-mon[81715]: pgmap v3186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:22 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:22 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #196. Immutable memtables: 0.
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.100442) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 196
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802100595, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 2595, "num_deletes": 544, "total_data_size": 4779188, "memory_usage": 4862240, "flush_reason": "Manual Compaction"}
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #197: started
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802124075, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 197, "file_size": 3125853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 94449, "largest_seqno": 97039, "table_properties": {"data_size": 3116184, "index_size": 5202, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 30539, "raw_average_key_size": 22, "raw_value_size": 3092778, "raw_average_value_size": 2316, "num_data_blocks": 224, "num_entries": 1335, "num_filter_entries": 1335, "num_deletions": 544, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094628, "oldest_key_time": 1769094628, "file_creation_time": 1769094802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 23622 microseconds, and 10811 cpu microseconds.
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.124143) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #197: 3125853 bytes OK
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.124173) [db/memtable_list.cc:519] [default] Level-0 commit table #197 started
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.125757) [db/memtable_list.cc:722] [default] Level-0 commit table #197: memtable #1 done
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.125771) EVENT_LOG_v1 {"time_micros": 1769094802125766, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.125795) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 4766263, prev total WAL file size 4766263, number of live WAL files 2.
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000193.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.127193) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353330' seq:72057594037927935, type:22 .. '6C6F676D0034373834' seq:0, type:0; will stop at (end)
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [197(3052KB)], [195(10MB)]
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802127317, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [197], "files_L6": [195], "score": -1, "input_data_size": 14437352, "oldest_snapshot_seqno": -1}
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #198: 14256 keys, 14224617 bytes, temperature: kUnknown
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802196842, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 198, "file_size": 14224617, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14144748, "index_size": 43148, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35653, "raw_key_size": 391448, "raw_average_key_size": 27, "raw_value_size": 13900208, "raw_average_value_size": 975, "num_data_blocks": 1579, "num_entries": 14256, "num_filter_entries": 14256, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 198, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.197128) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 14224617 bytes
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.198906) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.5 rd, 204.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.8 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 15357, records dropped: 1101 output_compression: NoCompression
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.198936) EVENT_LOG_v1 {"time_micros": 1769094802198922, "job": 126, "event": "compaction_finished", "compaction_time_micros": 69567, "compaction_time_cpu_micros": 33493, "output_level": 6, "num_output_files": 1, "total_output_size": 14224617, "num_input_records": 15357, "num_output_records": 14256, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802200036, "job": 126, "event": "table_file_deletion", "file_number": 197}
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000195.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802203695, "job": 126, "event": "table_file_deletion", "file_number": 195}
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.126991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.203909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.203920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.203923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.203926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.203928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:22.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:22.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:23 compute-1 ceph-mon[81715]: pgmap v3187: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:23 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:23 compute-1 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:24.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:24 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:24 compute-1 ceph-mon[81715]: pgmap v3188: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:13:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:13:25 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:25 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:26.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:26 compute-1 ceph-mon[81715]: pgmap v3189: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:26 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:26.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:27 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:27 compute-1 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:28.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:28.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:28 compute-1 ceph-mon[81715]: pgmap v3190: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:28 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:29 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:30.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:30.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:30 compute-1 ceph-mon[81715]: pgmap v3191: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:30 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:31 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:32 compute-1 podman[248728]: 2026-01-22 15:13:32.094384184 +0000 UTC m=+0.083115109 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 15:13:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:32.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:32.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:32 compute-1 ceph-mon[81715]: pgmap v3192: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:32 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:32 compute-1 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:33 compute-1 sudo[248754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:13:33 compute-1 sudo[248754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:33 compute-1 sudo[248754]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:33 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:33 compute-1 sudo[248779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:13:33 compute-1 sudo[248779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:33 compute-1 sudo[248779]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:33 compute-1 sudo[248804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:13:33 compute-1 sudo[248804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:33 compute-1 sudo[248804]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:33 compute-1 sudo[248829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:13:33 compute-1 sudo[248829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:34 compute-1 sudo[248829]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:34.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:34.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #199. Immutable memtables: 0.
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.670187) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 199
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814670221, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 435, "num_deletes": 274, "total_data_size": 343312, "memory_usage": 352712, "flush_reason": "Manual Compaction"}
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #200: started
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814673621, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 200, "file_size": 224727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 97044, "largest_seqno": 97474, "table_properties": {"data_size": 222373, "index_size": 389, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6751, "raw_average_key_size": 19, "raw_value_size": 217350, "raw_average_value_size": 635, "num_data_blocks": 17, "num_entries": 342, "num_filter_entries": 342, "num_deletions": 274, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094803, "oldest_key_time": 1769094803, "file_creation_time": 1769094814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 3494 microseconds, and 1248 cpu microseconds.
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.673679) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #200: 224727 bytes OK
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.673696) [db/memtable_list.cc:519] [default] Level-0 commit table #200 started
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.674755) [db/memtable_list.cc:722] [default] Level-0 commit table #200: memtable #1 done
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.674769) EVENT_LOG_v1 {"time_micros": 1769094814674764, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.674784) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 340461, prev total WAL file size 340461, number of live WAL files 2.
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000196.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.675070) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [200(219KB)], [198(13MB)]
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814675106, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [200], "files_L6": [198], "score": -1, "input_data_size": 14449344, "oldest_snapshot_seqno": -1}
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #201: 14040 keys, 12781736 bytes, temperature: kUnknown
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814753027, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 201, "file_size": 12781736, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12704229, "index_size": 41298, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35141, "raw_key_size": 387535, "raw_average_key_size": 27, "raw_value_size": 12464090, "raw_average_value_size": 887, "num_data_blocks": 1497, "num_entries": 14040, "num_filter_entries": 14040, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 201, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.753400) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 12781736 bytes
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.754862) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.0 rd, 163.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 13.6 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(121.2) write-amplify(56.9) OK, records in: 14598, records dropped: 558 output_compression: NoCompression
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.754878) EVENT_LOG_v1 {"time_micros": 1769094814754870, "job": 128, "event": "compaction_finished", "compaction_time_micros": 78101, "compaction_time_cpu_micros": 45587, "output_level": 6, "num_output_files": 1, "total_output_size": 12781736, "num_input_records": 14598, "num_output_records": 14040, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814755338, "job": 128, "event": "table_file_deletion", "file_number": 200}
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000198.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814758038, "job": 128, "event": "table_file_deletion", "file_number": 198}
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.675019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.758253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.758261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.758266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.758269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:34 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.758272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:34 compute-1 ceph-mon[81715]: pgmap v3193: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:34 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 15:13:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:13:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:13:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:13:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:13:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:13:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:13:35 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:36.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:36.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:36 compute-1 ceph-mon[81715]: pgmap v3194: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:36 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:38 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:38 compute-1 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:38.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:38.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:39 compute-1 ceph-mon[81715]: pgmap v3195: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:39 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:40 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:40 compute-1 ceph-mon[81715]: pgmap v3196: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:40.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:40.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:41 compute-1 sudo[248885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:13:41 compute-1 sudo[248885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:41 compute-1 sudo[248885]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:41 compute-1 sudo[248910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:13:41 compute-1 sudo[248910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:41 compute-1 sudo[248910]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:41 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:41 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:13:41 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:13:42 compute-1 ceph-mon[81715]: pgmap v3197: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:42 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:42.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:42.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:43 compute-1 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:43 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:44.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:44 compute-1 ceph-mon[81715]: pgmap v3198: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:44 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:44.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:45 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:46.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:46 compute-1 ceph-mon[81715]: pgmap v3199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:46 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:47 compute-1 podman[248935]: 2026-01-22 15:13:47.052359492 +0000 UTC m=+0.046208650 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 15:13:47 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:13:47.514 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:13:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:13:47.514 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:13:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:13:47.514 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:13:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:48.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:48 compute-1 ceph-mon[81715]: pgmap v3200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:48 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:48.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:49 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:13:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:50.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:13:50 compute-1 ceph-mon[81715]: pgmap v3201: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:50 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:50.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:51 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:52.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:52 compute-1 ceph-mon[81715]: pgmap v3202: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:52 compute-1 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:52 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:52.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:53 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:54.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:54.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:54 compute-1 ceph-mon[81715]: pgmap v3203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:54 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:56 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:56.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:56.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:58 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:13:58.307 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:13:58 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:13:58.308 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:13:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:58.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:13:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:58.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:59 compute-1 ceph-mon[81715]: pgmap v3204: 305 pgs: 2 active+clean+laggy, 303 active+clean; 848 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:14:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:00.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:00.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:00 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:00 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:00 compute-1 ceph-mon[81715]: pgmap v3205: 305 pgs: 2 active+clean+laggy, 303 active+clean; 848 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:14:00 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:00 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:00 compute-1 ceph-mon[81715]: pgmap v3206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 848 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:14:00 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:01 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:14:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:02.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:14:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:02.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:02 compute-1 ceph-mon[81715]: pgmap v3207: 305 pgs: 2 active+clean+laggy, 303 active+clean; 848 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:14:02 compute-1 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:02 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2969206743' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:14:02 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2969206743' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:14:02 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:03 compute-1 podman[248954]: 2026-01-22 15:14:03.143324708 +0000 UTC m=+0.125964259 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 22 15:14:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:03 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:04 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:14:04.311 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:14:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:04.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:14:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:04.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:14:05 compute-1 ceph-mon[81715]: pgmap v3208: 305 pgs: 2 active+clean+laggy, 303 active+clean; 848 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:14:05 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:06 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:06.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:06.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:07 compute-1 ceph-mon[81715]: pgmap v3209: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 22 KiB/s wr, 19 op/s
Jan 22 15:14:07 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:07 compute-1 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:08 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:14:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:08.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:14:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:08.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:09 compute-1 ceph-mon[81715]: pgmap v3210: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 22 15:14:09 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:10.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:10.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:11 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:11 compute-1 ceph-mon[81715]: pgmap v3211: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 694 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 15:14:11 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:12.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:12 compute-1 ceph-mon[81715]: pgmap v3212: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 21 op/s
Jan 22 15:14:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:14:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:12.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:14:13 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:13 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:13 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:13 compute-1 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:14:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:14.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:14:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:14:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:14.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:14:14 compute-1 ceph-mon[81715]: pgmap v3213: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 21 op/s
Jan 22 15:14:14 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:16.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:16.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:16 compute-1 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:17 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:17 compute-1 ceph-mon[81715]: pgmap v3214: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.1 KiB/s wr, 36 op/s
Jan 22 15:14:18 compute-1 podman[248981]: 2026-01-22 15:14:18.063015629 +0000 UTC m=+0.055944801 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:14:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:18.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:18.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:19 compute-1 ceph-mon[81715]: pgmap v3215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 21 op/s
Jan 22 15:14:19 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/35882197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:14:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/35882197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:14:20 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:14:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:20.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:14:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:20.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:21 compute-1 ceph-mon[81715]: pgmap v3216: 305 pgs: 2 active+clean+laggy, 303 active+clean; 864 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 997 KiB/s wr, 35 op/s
Jan 22 15:14:21 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:22.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:22 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:22 compute-1 ceph-mon[81715]: pgmap v3217: 305 pgs: 2 active+clean+laggy, 303 active+clean; 890 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 36 op/s
Jan 22 15:14:22 compute-1 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5847 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:22.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:23 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:24.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:14:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:24.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:14:24 compute-1 ceph-mon[81715]: pgmap v3218: 305 pgs: 2 active+clean+laggy, 303 active+clean; 890 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 30 op/s
Jan 22 15:14:24 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:14:24 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1130213286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:14:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:14:24 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1130213286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:14:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:14:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:26.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:14:26 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:26 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1130213286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:14:26 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1130213286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:14:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:26.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:27 compute-1 ceph-mon[81715]: pgmap v3219: 305 pgs: 2 active+clean+laggy, 303 active+clean; 856 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.7 MiB/s wr, 41 op/s
Jan 22 15:14:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:27 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:27 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 5857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:28.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:28.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:30.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:30.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:30 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:30 compute-1 ceph-mon[81715]: pgmap v3220: 305 pgs: 2 active+clean+laggy, 303 active+clean; 856 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Jan 22 15:14:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:31 compute-1 ceph-mon[81715]: pgmap v3221: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 30 op/s
Jan 22 15:14:31 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:32.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:14:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:32.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:14:33 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:33 compute-1 ceph-mon[81715]: pgmap v3222: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 795 KiB/s wr, 17 op/s
Jan 22 15:14:33 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 5862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:34 compute-1 podman[249001]: 2026-01-22 15:14:34.113444708 +0000 UTC m=+0.098733808 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 15:14:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:14:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:34.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:14:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:34.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:34 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:14:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.5 total, 600.0 interval
                                           Cumulative writes: 15K writes, 47K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 15K writes, 5211 syncs, 2.94 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 872 writes, 1903 keys, 872 commit groups, 1.0 writes per commit group, ingest: 0.90 MB, 0.00 MB/s
                                           Interval WAL: 872 writes, 408 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 22 15:14:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:35 compute-1 ceph-mon[81715]: pgmap v3223: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 15:14:35 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:36.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:36.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:36 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:36 compute-1 ceph-mon[81715]: pgmap v3224: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 15:14:37 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:38.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:14:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:38.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:14:38 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:38 compute-1 ceph-mon[81715]: pgmap v3225: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 596 B/s wr, 4 op/s
Jan 22 15:14:40 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:40.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:40.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:41 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:41 compute-1 ceph-mon[81715]: pgmap v3226: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 597 B/s wr, 4 op/s
Jan 22 15:14:41 compute-1 sudo[249027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:14:41 compute-1 sudo[249027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:41 compute-1 sudo[249027]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:41 compute-1 sudo[249052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:14:41 compute-1 sudo[249052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:41 compute-1 sudo[249052]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:41 compute-1 sudo[249077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:14:41 compute-1 sudo[249077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:41 compute-1 sudo[249077]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:41 compute-1 sudo[249102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:14:41 compute-1 sudo[249102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:41 compute-1 sudo[249102]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:42.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:42 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:42 compute-1 ceph-mon[81715]: pgmap v3227: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:42 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 5867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:42.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:44.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:14:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:44.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:14:44 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:44 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:14:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:14:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:14:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:14:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:14:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:14:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:45 compute-1 ceph-mon[81715]: pgmap v3228: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:45 compute-1 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:46.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:46.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:46 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:46 compute-1 ceph-mon[81715]: pgmap v3229: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #202. Immutable memtables: 0.
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.200454) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 202
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887200494, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 1229, "num_deletes": 369, "total_data_size": 1952493, "memory_usage": 1977216, "flush_reason": "Manual Compaction"}
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #203: started
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887208697, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 203, "file_size": 847476, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 97479, "largest_seqno": 98703, "table_properties": {"data_size": 843078, "index_size": 1601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15841, "raw_average_key_size": 23, "raw_value_size": 832092, "raw_average_value_size": 1209, "num_data_blocks": 69, "num_entries": 688, "num_filter_entries": 688, "num_deletions": 369, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094814, "oldest_key_time": 1769094814, "file_creation_time": 1769094887, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 8321 microseconds, and 3573 cpu microseconds.
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.208765) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #203: 847476 bytes OK
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.208789) [db/memtable_list.cc:519] [default] Level-0 commit table #203 started
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.209780) [db/memtable_list.cc:722] [default] Level-0 commit table #203: memtable #1 done
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.209793) EVENT_LOG_v1 {"time_micros": 1769094887209789, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.209810) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 1945909, prev total WAL file size 1945909, number of live WAL files 2.
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000199.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.210498) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373537' seq:72057594037927935, type:22 .. '6D6772737461740033303038' seq:0, type:0; will stop at (end)
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [203(827KB)], [201(12MB)]
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887210557, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [203], "files_L6": [201], "score": -1, "input_data_size": 13629212, "oldest_snapshot_seqno": -1}
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #204: 14005 keys, 10132259 bytes, temperature: kUnknown
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887288216, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 204, "file_size": 10132259, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10058756, "index_size": 37358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35077, "raw_key_size": 386606, "raw_average_key_size": 27, "raw_value_size": 9823222, "raw_average_value_size": 701, "num_data_blocks": 1335, "num_entries": 14005, "num_filter_entries": 14005, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094887, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 204, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.288476) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 10132259 bytes
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.290340) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.4 rd, 130.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.2 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(28.0) write-amplify(12.0) OK, records in: 14728, records dropped: 723 output_compression: NoCompression
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.290356) EVENT_LOG_v1 {"time_micros": 1769094887290348, "job": 130, "event": "compaction_finished", "compaction_time_micros": 77719, "compaction_time_cpu_micros": 50705, "output_level": 6, "num_output_files": 1, "total_output_size": 10132259, "num_input_records": 14728, "num_output_records": 14005, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887290577, "job": 130, "event": "table_file_deletion", "file_number": 203}
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000201.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887292611, "job": 130, "event": "table_file_deletion", "file_number": 201}
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.210399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.292704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.292709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.292711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.292712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:14:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.292714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:14:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:14:47.514 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:14:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:14:47.515 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:14:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:14:47.515 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:14:47 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:47 compute-1 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 5877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:48.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:14:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:48.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:14:48 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:48 compute-1 ceph-mon[81715]: pgmap v3230: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:49 compute-1 podman[249158]: 2026-01-22 15:14:49.052353735 +0000 UTC m=+0.042100870 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 15:14:49 compute-1 sudo[249178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:14:49 compute-1 sudo[249178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:49 compute-1 sudo[249178]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:49 compute-1 sudo[249203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:14:49 compute-1 sudo[249203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:49 compute-1 sudo[249203]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:50 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:14:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:14:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:14:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:50.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:14:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:14:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:50.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:14:51 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:51 compute-1 ceph-mon[81715]: pgmap v3231: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:51 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:52.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:52.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:53 compute-1 ceph-mon[81715]: pgmap v3232: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:54.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:54.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:54 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:54 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:54 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:54 compute-1 ceph-mon[81715]: pgmap v3233: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:55 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:56.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:56.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:57 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:57 compute-1 ceph-mon[81715]: pgmap v3234: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:57 compute-1 ceph-mon[81715]: Health check update: 122 slow ops, oldest one blocked for 5887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:58.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:58 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:58 compute-1 ceph-mon[81715]: pgmap v3235: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:14:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:58.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:59 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:59 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:00.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:00.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:01 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:01 compute-1 ceph-mon[81715]: pgmap v3236: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:02 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:02.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:02.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:03 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:03 compute-1 ceph-mon[81715]: pgmap v3237: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:03 compute-1 ceph-mon[81715]: Health check update: 122 slow ops, oldest one blocked for 5892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:04.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:04 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:04 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:04 compute-1 ceph-mon[81715]: pgmap v3238: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:04.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:05 compute-1 podman[249228]: 2026-01-22 15:15:05.09659957 +0000 UTC m=+0.092329307 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 15:15:05 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:06.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:06 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:15:06.538 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:15:06 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:15:06.539 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:15:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:06.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:06 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:06 compute-1 ceph-mon[81715]: pgmap v3239: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:07 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:08.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:08.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:08 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:08 compute-1 ceph-mon[81715]: pgmap v3240: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:09 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:10.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:15:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:10.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:15:10 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:10 compute-1 ceph-mon[81715]: pgmap v3241: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:11 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:12.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:12.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:12 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:12 compute-1 ceph-mon[81715]: pgmap v3242: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:12 compute-1 ceph-mon[81715]: Health check update: 122 slow ops, oldest one blocked for 5902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:13 compute-1 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:14.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:14 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:15:14.540 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:15:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:14.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:15 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:15 compute-1 ceph-mon[81715]: pgmap v3243: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:16 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:16 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:16 compute-1 ceph-mon[81715]: pgmap v3244: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:16.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:16.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:17 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:17 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 5907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:18 compute-1 ceph-mon[81715]: pgmap v3245: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:18 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:18.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:18.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:15:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3017842320' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:15:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:15:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3017842320' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:15:19 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3017842320' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:15:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3017842320' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:15:20 compute-1 podman[249256]: 2026-01-22 15:15:20.051487755 +0000 UTC m=+0.044029431 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:15:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:20.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:20.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:21 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:21 compute-1 ceph-mon[81715]: pgmap v3246: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:22.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:22.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:23 compute-1 ceph-mon[81715]: pgmap v3247: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:24.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:26 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:26 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:26 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:26 compute-1 ceph-mon[81715]: pgmap v3248: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:26 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 5912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:26.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:26.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:27 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:27 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:27 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:27 compute-1 ceph-mon[81715]: pgmap v3249: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:28.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:15:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:28.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:15:28 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:28 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:28 compute-1 ceph-mon[81715]: pgmap v3250: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:29 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:30.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:30.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:31 compute-1 ceph-mon[81715]: pgmap v3251: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:15:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 18K writes, 99K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1740 writes, 10K keys, 1740 commit groups, 1.0 writes per commit group, ingest: 16.39 MB, 0.03 MB/s
                                           Interval WAL: 1740 writes, 1740 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     74.2      1.44              0.35        65    0.022       0      0       0.0       0.0
                                             L6      1/0    9.66 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.8    145.4    125.6      4.91              1.87        64    0.077    653K    35K       0.0       0.0
                                            Sum      1/0    9.66 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.8    112.5    114.0      6.35              2.22       129    0.049    653K    35K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.6    142.7    139.8      0.61              0.32        14    0.043    103K   5155       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    145.4    125.6      4.91              1.87        64    0.077    653K    35K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     74.3      1.43              0.35        64    0.022       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.104, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.71 GB write, 0.12 MB/s write, 0.70 GB read, 0.12 MB/s read, 6.3 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 75.56 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000719 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3956,71.55 MB,23.5368%) FilterBlock(129,1.77 MB,0.581977%) IndexBlock(129,2.23 MB,0.735037%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 15:15:32 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:32 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:32 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:32 compute-1 ceph-mon[81715]: pgmap v3252: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:32 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 5917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:32.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:32.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:33 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:34.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:15:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:34.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:15:35 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:35 compute-1 ceph-mon[81715]: pgmap v3253: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:36 compute-1 podman[249275]: 2026-01-22 15:15:36.096543669 +0000 UTC m=+0.083210712 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:15:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:36.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:15:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:36.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:15:37 compute-1 ceph-mon[81715]: pgmap v3254: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:38.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:38.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:39 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:39 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:39 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:39 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 5922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:39 compute-1 ceph-mon[81715]: pgmap v3255: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:39 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:40 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:40 compute-1 ceph-mon[81715]: pgmap v3256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:40 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:40.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:40.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:41 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:42.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:42.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:43 compute-1 ceph-mon[81715]: pgmap v3257: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:43 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:43 compute-1 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 5932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:44.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:44.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:44 compute-1 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:44 compute-1 ceph-mon[81715]: pgmap v3258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:44 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:46 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:46.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:46.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:15:47.515 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:15:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:15:47.516 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:15:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:15:47.516 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:15:47 compute-1 ceph-mon[81715]: pgmap v3259: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:47 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:47 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:47 compute-1 ceph-mon[81715]: Health check update: 123 slow ops, oldest one blocked for 5938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:48.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:48 compute-1 ceph-mon[81715]: pgmap v3260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:48 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:48.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:49 compute-1 sudo[249301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:15:49 compute-1 sudo[249301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:15:49 compute-1 sudo[249301]: pam_unix(sudo:session): session closed for user root
Jan 22 15:15:49 compute-1 sudo[249326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:15:49 compute-1 sudo[249326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:15:49 compute-1 sudo[249326]: pam_unix(sudo:session): session closed for user root
Jan 22 15:15:49 compute-1 sudo[249351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:15:49 compute-1 sudo[249351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:15:49 compute-1 sudo[249351]: pam_unix(sudo:session): session closed for user root
Jan 22 15:15:49 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:49 compute-1 sudo[249376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:15:49 compute-1 sudo[249376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:15:50 compute-1 sudo[249376]: pam_unix(sudo:session): session closed for user root
Jan 22 15:15:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:50.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:50.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:50 compute-1 ceph-mon[81715]: pgmap v3261: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:50 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:51 compute-1 podman[249433]: 2026-01-22 15:15:51.066402606 +0000 UTC m=+0.053327271 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true)
Jan 22 15:15:51 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:15:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:52.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:52.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:53 compute-1 ceph-mon[81715]: pgmap v3262: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:53 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:15:53 compute-1 ceph-mon[81715]: Health check update: 123 slow ops, oldest one blocked for 5943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:15:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:15:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:15:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:15:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:15:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:15:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:54 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:54.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:15:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:54.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:15:55 compute-1 ceph-mon[81715]: pgmap v3263: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:55 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:56.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:15:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:56.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:15:56 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:57 compute-1 ceph-mon[81715]: pgmap v3264: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:57 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:57 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:57 compute-1 ceph-mon[81715]: Health check update: 123 slow ops, oldest one blocked for 5948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:58.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:15:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:58.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:59 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:59 compute-1 ceph-mon[81715]: pgmap v3265: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:00 compute-1 sudo[249453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:16:00 compute-1 sudo[249453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:16:00 compute-1 sudo[249453]: pam_unix(sudo:session): session closed for user root
Jan 22 15:16:00 compute-1 sudo[249478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:16:00 compute-1 sudo[249478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:16:00 compute-1 sudo[249478]: pam_unix(sudo:session): session closed for user root
Jan 22 15:16:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:16:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:00.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:16:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:00.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:01 compute-1 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:16:01 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:16:01 compute-1 ceph-mon[81715]: pgmap v3266: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:16:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:16:02 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:16:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:02.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:02.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:03 compute-1 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:16:03 compute-1 ceph-mon[81715]: pgmap v3267: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:04 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:04 compute-1 ceph-mon[81715]: pgmap v3268: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:04.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:04.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:05 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:05 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:06 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:06 compute-1 ceph-mon[81715]: pgmap v3269: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:06.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:06.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:07 compute-1 podman[249504]: 2026-01-22 15:16:07.117059412 +0000 UTC m=+0.097708421 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 15:16:07 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 5957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:16:07 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:08.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:08.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:09 compute-1 ceph-mon[81715]: pgmap v3270: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:09 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:10 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:10.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:10.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:11 compute-1 ceph-mon[81715]: pgmap v3271: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:11 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:12 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:12 compute-1 ceph-mon[81715]: pgmap v3272: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:12 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:12.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:12.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:13 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 5963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:16:13 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:14 compute-1 ceph-mon[81715]: pgmap v3273: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:14 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:14.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:14.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:15 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:16.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:16.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:17 compute-1 ceph-mon[81715]: pgmap v3274: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:17 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:18.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:18.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:18 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:18 compute-1 ceph-mon[81715]: pgmap v3275: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:18 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2350512641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:16:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2350512641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:16:20 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:20.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:20.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:21 compute-1 ceph-mon[81715]: pgmap v3276: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:21 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:21 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:22 compute-1 podman[249532]: 2026-01-22 15:16:22.081573855 +0000 UTC m=+0.059653481 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:16:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:16:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:22.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:16:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:22.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:23 compute-1 ceph-mon[81715]: pgmap v3277: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:23 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 5973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:16:23 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:24 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:24 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:16:24.411 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:16:24 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:16:24.412 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:16:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:16:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:24.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:16:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:24.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:25 compute-1 ceph-mon[81715]: pgmap v3278: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:25 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:26.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:26.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:26 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:26 compute-1 ceph-mon[81715]: pgmap v3279: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:26 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:27 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:27 compute-1 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 5978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:16:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:28.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:16:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:28.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:16:29 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:29 compute-1 ceph-mon[81715]: pgmap v3280: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:29 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:16:29.415 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:16:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:16:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:30.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:16:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:31.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:31 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:31 compute-1 ceph-mon[81715]: pgmap v3281: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:32 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:32 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:32.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:33.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:33 compute-1 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:33 compute-1 ceph-mon[81715]: pgmap v3282: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:34 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:34.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:35.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:35 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:35 compute-1 ceph-mon[81715]: pgmap v3283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:35 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:36 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:36 compute-1 ceph-mon[81715]: pgmap v3284: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:36.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:37.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:37 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:37 compute-1 ceph-mon[81715]: Health check update: 116 slow ops, oldest one blocked for 5987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:16:37 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:38 compute-1 podman[249552]: 2026-01-22 15:16:38.122522889 +0000 UTC m=+0.099543100 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 15:16:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:38.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:39 compute-1 ceph-mon[81715]: pgmap v3285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:39 compute-1 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:16:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:39.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:40 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:40 compute-1 ceph-mon[81715]: pgmap v3286: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:40 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:16:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:40.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:16:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:41.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:42 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:42.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:43.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:43 compute-1 ceph-mon[81715]: pgmap v3287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:43 compute-1 ceph-mon[81715]: Health check update: 116 slow ops, oldest one blocked for 5992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:16:43 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:44.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:45.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:45 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:45 compute-1 ceph-mon[81715]: pgmap v3288: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:45 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #205. Immutable memtables: 0.
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.719477) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 205
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005719596, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 1826, "num_deletes": 446, "total_data_size": 3291801, "memory_usage": 3333600, "flush_reason": "Manual Compaction"}
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #206: started
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005732442, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 206, "file_size": 2139298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 98708, "largest_seqno": 100529, "table_properties": {"data_size": 2132203, "index_size": 3524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 22351, "raw_average_key_size": 22, "raw_value_size": 2115184, "raw_average_value_size": 2149, "num_data_blocks": 153, "num_entries": 984, "num_filter_entries": 984, "num_deletions": 446, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094887, "oldest_key_time": 1769094887, "file_creation_time": 1769095005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 12936 microseconds, and 5562 cpu microseconds.
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.732479) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #206: 2139298 bytes OK
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.732495) [db/memtable_list.cc:519] [default] Level-0 commit table #206 started
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.733694) [db/memtable_list.cc:722] [default] Level-0 commit table #206: memtable #1 done
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.733708) EVENT_LOG_v1 {"time_micros": 1769095005733703, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.733748) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 3282437, prev total WAL file size 3282437, number of live WAL files 2.
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000202.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.734527) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [206(2089KB)], [204(9894KB)]
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005734572, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [206], "files_L6": [204], "score": -1, "input_data_size": 12271557, "oldest_snapshot_seqno": -1}
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #207: 14084 keys, 10401332 bytes, temperature: kUnknown
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005786424, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 207, "file_size": 10401332, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10326854, "index_size": 38141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35269, "raw_key_size": 388185, "raw_average_key_size": 27, "raw_value_size": 10089523, "raw_average_value_size": 716, "num_data_blocks": 1367, "num_entries": 14084, "num_filter_entries": 14084, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 207, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.786684) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 10401332 bytes
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.787849) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 236.4 rd, 200.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.7 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(10.6) write-amplify(4.9) OK, records in: 14989, records dropped: 905 output_compression: NoCompression
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.787864) EVENT_LOG_v1 {"time_micros": 1769095005787857, "job": 132, "event": "compaction_finished", "compaction_time_micros": 51918, "compaction_time_cpu_micros": 26482, "output_level": 6, "num_output_files": 1, "total_output_size": 10401332, "num_input_records": 14989, "num_output_records": 14084, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005788395, "job": 132, "event": "table_file_deletion", "file_number": 206}
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000204.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005790306, "job": 132, "event": "table_file_deletion", "file_number": 204}
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.734448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.790352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.790357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.790359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.790360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:16:45 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.790362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:16:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:46.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:47.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:16:47.516 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:16:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:16:47.517 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:16:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:16:47.517 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:16:47 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:47 compute-1 ceph-mon[81715]: pgmap v3289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:48.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:48 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:48 compute-1 ceph-mon[81715]: Health check update: 116 slow ops, oldest one blocked for 5997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:16:48 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:48 compute-1 ceph-mon[81715]: pgmap v3290: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:48 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:49 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:50.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:51.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:51 compute-1 ceph-mon[81715]: pgmap v3291: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:51 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:52 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:52.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:53.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:53 compute-1 podman[249578]: 2026-01-22 15:16:53.101815318 +0000 UTC m=+0.090294162 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 22 15:16:53 compute-1 ceph-mon[81715]: pgmap v3292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:53 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:54 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:54.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:55.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:56 compute-1 ceph-mon[81715]: pgmap v3293: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:56 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:56 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:56.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:57.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:57 compute-1 ceph-mon[81715]: pgmap v3294: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:57 compute-1 ceph-mon[81715]: Health check update: 116 slow ops, oldest one blocked for 6007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:16:57 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 60 ])
Jan 22 15:16:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:58 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:16:58 compute-1 ceph-mon[81715]: pgmap v3295: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:58 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:58.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:16:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:59.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:59 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:17:00 compute-1 sudo[249598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:17:00 compute-1 sudo[249598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:00 compute-1 sudo[249598]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:00 compute-1 sudo[249623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:17:00 compute-1 sudo[249623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:00 compute-1 sudo[249623]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:00 compute-1 sudo[249648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:17:00 compute-1 sudo[249648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:00 compute-1 sudo[249648]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:00 compute-1 sudo[249673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 15:17:00 compute-1 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 22 15:17:00 compute-1 sudo[249673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:17:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:00.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:17:00 compute-1 ceph-mon[81715]: pgmap v3296: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 683 KiB/s rd, 0 op/s
Jan 22 15:17:00 compute-1 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:17:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:01.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:01 compute-1 podman[249769]: 2026-01-22 15:17:01.191813815 +0000 UTC m=+0.055299844 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 15:17:01 compute-1 podman[249769]: 2026-01-22 15:17:01.287023168 +0000 UTC m=+0.150509167 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 15:17:01 compute-1 sudo[249673]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:02 compute-1 sudo[249891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:17:02 compute-1 sudo[249891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:02 compute-1 sudo[249891]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:02 compute-1 sudo[249916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:17:02 compute-1 sudo[249916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:02 compute-1 sudo[249916]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:02 compute-1 sudo[249941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:17:02 compute-1 sudo[249941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:02 compute-1 sudo[249941]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:02 compute-1 sudo[249966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:17:02 compute-1 sudo[249966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:02 compute-1 ceph-mon[81715]: 68 slow requests (by type [ 'delayed' : 68 ] most affected pool [ 'vms' : 47 ])
Jan 22 15:17:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:17:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:02.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:03 compute-1 sudo[249966]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:03.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:03 compute-1 ceph-mon[81715]: pgmap v3297: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 20 op/s
Jan 22 15:17:03 compute-1 ceph-mon[81715]: Health check update: 116 slow ops, oldest one blocked for 6012 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:03 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 32 ])
Jan 22 15:17:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:17:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:17:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:17:03 compute-1 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 60 ])
Jan 22 15:17:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:17:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:17:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:17:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:17:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:04.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:05 compute-1 ceph-mon[81715]: pgmap v3298: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 20 op/s
Jan 22 15:17:05 compute-1 ceph-mon[81715]: 131 slow requests (by type [ 'delayed' : 131 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:05.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:06.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:07.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:07 compute-1 ceph-mon[81715]: 131 slow requests (by type [ 'delayed' : 131 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:08 compute-1 ceph-mon[81715]: pgmap v3299: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 597 B/s wr, 94 op/s
Jan 22 15:17:08 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:08 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:08 compute-1 ceph-mon[81715]: Health check update: 131 slow ops, oldest one blocked for 6018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:08 compute-1 ceph-mon[81715]: pgmap v3300: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 597 B/s wr, 94 op/s
Jan 22 15:17:08 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:08.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:09 compute-1 podman[250021]: 2026-01-22 15:17:09.094768815 +0000 UTC m=+0.086128331 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:17:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:09.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:09 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:10.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:11.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:11 compute-1 ceph-mon[81715]: pgmap v3301: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 852 B/s wr, 116 op/s
Jan 22 15:17:11 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:11 compute-1 sudo[250048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:17:11 compute-1 sudo[250048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:11 compute-1 sudo[250048]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:11 compute-1 sudo[250073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:17:11 compute-1 sudo[250073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:11 compute-1 sudo[250073]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:12 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:12 compute-1 ceph-mon[81715]: pgmap v3302: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 852 B/s wr, 151 op/s
Jan 22 15:17:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:17:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:17:12 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:12.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:17:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:13.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:17:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:14 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 6023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:14 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:14.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:15.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:15 compute-1 ceph-mon[81715]: pgmap v3303: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 682 B/s wr, 131 op/s
Jan 22 15:17:15 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:16 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:16 compute-1 ceph-mon[81715]: pgmap v3304: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 682 B/s wr, 175 op/s
Jan 22 15:17:16 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:16.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:17.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:18 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:17:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/243300702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:17:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:17:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/243300702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:17:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:17:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:18.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:17:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:19.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:19 compute-1 ceph-mon[81715]: pgmap v3305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 255 B/s wr, 101 op/s
Jan 22 15:17:19 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/243300702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:17:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/243300702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:17:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:20.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:21.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:21 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:21 compute-1 ceph-mon[81715]: pgmap v3306: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 255 B/s wr, 101 op/s
Jan 22 15:17:21 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:22.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:23.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:23 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:23 compute-1 ceph-mon[81715]: pgmap v3307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 0 B/s wr, 79 op/s
Jan 22 15:17:23 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:23 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 6028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:24 compute-1 podman[250098]: 2026-01-22 15:17:24.057292564 +0000 UTC m=+0.049259641 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:17:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:24.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:24 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:24 compute-1 ceph-mon[81715]: pgmap v3308: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Jan 22 15:17:24 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:25 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:17:25.022 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:17:25 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:17:25.023 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:17:25 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:17:25.023 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:17:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:17:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:25.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:17:26 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:26.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:27.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:28 compute-1 ceph-mon[81715]: pgmap v3309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Jan 22 15:17:28 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:28 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:28 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 6038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:17:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:28.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:17:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:29.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:30 compute-1 ceph-mon[81715]: pgmap v3310: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:30.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:31.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:32 compute-1 ceph-mon[81715]: pgmap v3311: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:32.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:33.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:33 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:33 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:33 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:33 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:33 compute-1 ceph-mon[81715]: pgmap v3312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:33 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:33 compute-1 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 6043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:34.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:35.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:35 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:35 compute-1 ceph-mon[81715]: pgmap v3313: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:36.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:37.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:37 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:37 compute-1 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:37 compute-1 ceph-mon[81715]: pgmap v3314: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:37 compute-1 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:17:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:38 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:17:38 compute-1 ceph-mon[81715]: pgmap v3315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:17:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:38.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:17:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:39.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:40 compute-1 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:17:40 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:40 compute-1 podman[250117]: 2026-01-22 15:17:40.135119868 +0000 UTC m=+0.125452564 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 15:17:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:40.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:41.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:41 compute-1 ceph-mon[81715]: pgmap v3316: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:42.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:43.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:43 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:43 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:43 compute-1 ceph-mon[81715]: pgmap v3317: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:43 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:43 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:44 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:44 compute-1 ceph-mon[81715]: pgmap v3318: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:44.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:45.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:45 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:45 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:46.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:47.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:17:47.517 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:17:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:17:47.517 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:17:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:17:47.517 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:17:47 compute-1 ceph-mon[81715]: pgmap v3319: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:48.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:49.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:49 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:49 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:49 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:49 compute-1 ceph-mon[81715]: pgmap v3320: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:49 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:50 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:50.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:51.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:52.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:17:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:53.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:17:53 compute-1 ceph-mon[81715]: pgmap v3321: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:53 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:54 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:54 compute-1 ceph-mon[81715]: pgmap v3322: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:54 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:54 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:54 compute-1 ceph-mon[81715]: pgmap v3323: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:54 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:54 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:54.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:55 compute-1 podman[250143]: 2026-01-22 15:17:55.057812738 +0000 UTC m=+0.052252751 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:17:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:55.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:55 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:56.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:57.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:57 compute-1 ceph-mon[81715]: pgmap v3324: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:57 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:58 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:58 compute-1 ceph-mon[81715]: pgmap v3325: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:58.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:17:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:17:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:59.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:17:59 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:59 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:00.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:01 compute-1 ceph-mon[81715]: pgmap v3326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:01 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:01.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:02 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:02.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:03.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:03 compute-1 ceph-mon[81715]: pgmap v3327: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:03 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:03 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:04 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:04 compute-1 ceph-mon[81715]: pgmap v3328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:04 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:04.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:05.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:06 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:06 compute-1 ceph-mon[81715]: pgmap v3329: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:06 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:18:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:06.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:18:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:07.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:07 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:07 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:08.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:08 compute-1 ceph-mon[81715]: pgmap v3330: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:08 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:09.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:09 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:10.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:10 compute-1 ceph-mon[81715]: pgmap v3331: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:10 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:11 compute-1 podman[250162]: 2026-01-22 15:18:11.126044545 +0000 UTC m=+0.111770069 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 15:18:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:11.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:11 compute-1 sudo[250189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:18:11 compute-1 sudo[250189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:11 compute-1 sudo[250189]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:12 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:12 compute-1 sudo[250214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:18:12 compute-1 sudo[250214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:12 compute-1 sudo[250214]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:12 compute-1 sudo[250239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:18:12 compute-1 sudo[250239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:12 compute-1 sudo[250239]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:12 compute-1 sudo[250264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:18:12 compute-1 sudo[250264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:12 compute-1 sudo[250264]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:12.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:13 compute-1 ceph-mon[81715]: pgmap v3332: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:13 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:13 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:18:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:13.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:14 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:18:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:18:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:18:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:18:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:18:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:18:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:18:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:14.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:15.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:15 compute-1 ceph-mon[81715]: pgmap v3333: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:15 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:16 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:16 compute-1 ceph-mon[81715]: pgmap v3334: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:16 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:16.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:17.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:17 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:17 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:18 compute-1 ceph-mon[81715]: pgmap v3335: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:18 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:18.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:19.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/433821131' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:18:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/433821131' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:18:19 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:20 compute-1 sudo[250319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:18:20 compute-1 sudo[250319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:20 compute-1 sudo[250319]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:20 compute-1 sudo[250344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:18:20 compute-1 sudo[250344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:20 compute-1 sudo[250344]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:20 compute-1 ceph-mon[81715]: pgmap v3336: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:20 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:18:20 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:18:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:20.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:18:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:21.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:18:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:22.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:23.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:23 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:23 compute-1 ceph-mon[81715]: pgmap v3337: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:23 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:25.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:25 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:25 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:25 compute-1 ceph-mon[81715]: pgmap v3338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:25 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:25.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:26 compute-1 podman[250369]: 2026-01-22 15:18:26.050393091 +0000 UTC m=+0.045880232 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:18:26 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:27.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:27.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:27 compute-1 ceph-mon[81715]: pgmap v3339: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:27 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:28 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:18:28.013 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:18:28 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:18:28.014 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:18:28 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:28 compute-1 ceph-mon[81715]: pgmap v3340: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:28 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:18:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:29.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:18:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:29.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:29 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:30 compute-1 ceph-mon[81715]: pgmap v3341: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:30 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:31 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:18:31.017 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:18:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:31.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:31.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:32 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:33 compute-1 ceph-mon[81715]: pgmap v3342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:33 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:33 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:33.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:33.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:34 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:35.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:35 compute-1 ceph-mon[81715]: pgmap v3343: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:35 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:18:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:35.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:18:36 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:37.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:37 compute-1 ceph-mon[81715]: pgmap v3344: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:37 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:37.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:38 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:38 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:39.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:39.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:39 compute-1 ceph-mon[81715]: pgmap v3345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:39 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:40 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:40 compute-1 ceph-mon[81715]: pgmap v3346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:40 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:41.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:41.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:42 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:42 compute-1 podman[250389]: 2026-01-22 15:18:42.135699196 +0000 UTC m=+0.127750978 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 15:18:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:43.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:43.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:43 compute-1 ceph-mon[81715]: pgmap v3347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:43 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:43 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:44 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:44 compute-1 ceph-mon[81715]: pgmap v3348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:44 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:45.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:45.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:45 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:47.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:47.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:47 compute-1 ceph-mon[81715]: pgmap v3349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:47 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:18:47.518 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:18:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:18:47.518 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:18:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:18:47.518 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #208. Immutable memtables: 0.
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.252269) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 208
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128252344, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 1885, "num_deletes": 459, "total_data_size": 3514065, "memory_usage": 3584952, "flush_reason": "Manual Compaction"}
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #209: started
Jan 22 15:18:48 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:48 compute-1 ceph-mon[81715]: pgmap v3350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128271992, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 209, "file_size": 2286898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 100534, "largest_seqno": 102414, "table_properties": {"data_size": 2279320, "index_size": 3943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 23062, "raw_average_key_size": 22, "raw_value_size": 2261434, "raw_average_value_size": 2217, "num_data_blocks": 171, "num_entries": 1020, "num_filter_entries": 1020, "num_deletions": 459, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095006, "oldest_key_time": 1769095006, "file_creation_time": 1769095128, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 19795 microseconds, and 8514 cpu microseconds.
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.272075) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #209: 2286898 bytes OK
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.272096) [db/memtable_list.cc:519] [default] Level-0 commit table #209 started
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.273409) [db/memtable_list.cc:722] [default] Level-0 commit table #209: memtable #1 done
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.273429) EVENT_LOG_v1 {"time_micros": 1769095128273422, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.273451) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 3504371, prev total WAL file size 3507902, number of live WAL files 2.
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000205.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.274692) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034373833' seq:72057594037927935, type:22 .. '6C6F676D0035303335' seq:0, type:0; will stop at (end)
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [209(2233KB)], [207(10157KB)]
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128274726, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [209], "files_L6": [207], "score": -1, "input_data_size": 12688230, "oldest_snapshot_seqno": -1}
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #210: 14171 keys, 12485492 bytes, temperature: kUnknown
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128343532, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 210, "file_size": 12485492, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12407855, "index_size": 41108, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35461, "raw_key_size": 390186, "raw_average_key_size": 27, "raw_value_size": 12166289, "raw_average_value_size": 858, "num_data_blocks": 1492, "num_entries": 14171, "num_filter_entries": 14171, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095128, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 210, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.343819) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 12485492 bytes
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.345467) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.2 rd, 181.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.9 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(11.0) write-amplify(5.5) OK, records in: 15104, records dropped: 933 output_compression: NoCompression
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.345488) EVENT_LOG_v1 {"time_micros": 1769095128345478, "job": 134, "event": "compaction_finished", "compaction_time_micros": 68882, "compaction_time_cpu_micros": 38707, "output_level": 6, "num_output_files": 1, "total_output_size": 12485492, "num_input_records": 15104, "num_output_records": 14171, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128346131, "job": 134, "event": "table_file_deletion", "file_number": 209}
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000207.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128348341, "job": 134, "event": "table_file_deletion", "file_number": 207}
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.274578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.348388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.348393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.348395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.348397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:18:48 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.348398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:18:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:18:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:49.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:18:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:49.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:50 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:51.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:51 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:51 compute-1 ceph-mon[81715]: pgmap v3351: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:51 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:51.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:52 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:53.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:53.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:54 compute-1 ceph-mon[81715]: pgmap v3352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:54 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:54 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:54 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:55.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:18:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:55.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:18:55 compute-1 ceph-mon[81715]: pgmap v3353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:55 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:57 compute-1 podman[250415]: 2026-01-22 15:18:57.070765819 +0000 UTC m=+0.055222753 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 15:18:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:57.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:57 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:57 compute-1 ceph-mon[81715]: pgmap v3354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:57.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:58 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:58 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:58 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:18:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:59.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:18:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:18:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:59.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:59 compute-1 ceph-mon[81715]: pgmap v3355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:59 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:00 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:00 compute-1 ceph-mon[81715]: pgmap v3356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:00 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:01.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:01.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:02 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:19:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:03.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:19:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:03.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:03 compute-1 ceph-mon[81715]: pgmap v3357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:03 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:03 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:04 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:04 compute-1 ceph-mon[81715]: pgmap v3358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:04 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:05.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:05.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:05 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:06 compute-1 ceph-mon[81715]: pgmap v3359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:06 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:19:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:07.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:19:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:07.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:09 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:09 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:09.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:09.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:10 compute-1 ceph-mon[81715]: pgmap v3360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:10 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:10 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:11.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:11.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:11 compute-1 ceph-mon[81715]: pgmap v3361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:11 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:11 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:12 compute-1 ceph-mon[81715]: pgmap v3362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:12 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:13 compute-1 podman[250434]: 2026-01-22 15:19:13.108673052 +0000 UTC m=+0.105322095 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 22 15:19:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:13.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:13.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:13 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:13 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:14 compute-1 ceph-mon[81715]: pgmap v3363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:14 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:15.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:15.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:15 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:16 compute-1 ceph-mon[81715]: pgmap v3364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:16 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:17.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:17.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:18 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:18 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:19.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:19 compute-1 ceph-mon[81715]: pgmap v3365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:19 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/511152699' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:19:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/511152699' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:19:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:19.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:20 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:20 compute-1 sudo[250461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:19:20 compute-1 sudo[250461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:20 compute-1 sudo[250461]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:20 compute-1 sudo[250486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:19:20 compute-1 sudo[250486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:20 compute-1 sudo[250486]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:20 compute-1 sudo[250511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:19:20 compute-1 sudo[250511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:20 compute-1 sudo[250511]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:21 compute-1 sudo[250536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 15:19:21 compute-1 sudo[250536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:19:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:21.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:19:21 compute-1 sudo[250536]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:21.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:21 compute-1 ceph-mon[81715]: pgmap v3366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:21 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:21 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:19:21 compute-1 sudo[250581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:19:21 compute-1 sudo[250581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:21 compute-1 sudo[250581]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:21 compute-1 sudo[250606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:19:21 compute-1 sudo[250606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:21 compute-1 sudo[250606]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:21 compute-1 sudo[250631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:19:21 compute-1 sudo[250631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:22 compute-1 sudo[250631]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:22 compute-1 sudo[250656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:19:22 compute-1 sudo[250656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:22 compute-1 sudo[250656]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:22 compute-1 ceph-mon[81715]: pgmap v3367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:19:22 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:19:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:19:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:19:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:19:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:19:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:19:22 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:23.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:23.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:25 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:25.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:25.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:26 compute-1 ceph-mon[81715]: pgmap v3368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:26 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:26 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:27.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:27.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:27 compute-1 ceph-mon[81715]: pgmap v3369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:27 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:28 compute-1 podman[250712]: 2026-01-22 15:19:28.068723755 +0000 UTC m=+0.052820438 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:19:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:29 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:29 compute-1 ceph-mon[81715]: pgmap v3370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:29 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:29 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:29.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:29.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:30 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:31 compute-1 sudo[250731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:19:31 compute-1 sudo[250731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:31 compute-1 sudo[250731]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:31.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:31 compute-1 sudo[250756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:19:31 compute-1 sudo[250756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:31 compute-1 sudo[250756]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:31.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:31 compute-1 ceph-mon[81715]: pgmap v3371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:31 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:31 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:19:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:19:33 compute-1 ceph-mon[81715]: pgmap v3372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:33 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:33 compute-1 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:33.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:19:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:33.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:19:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:34 compute-1 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:35.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:35 compute-1 ceph-mon[81715]: pgmap v3373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:35 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:35.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:36 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:37.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:37 compute-1 ceph-mon[81715]: pgmap v3374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:37 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:37.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:38 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:38 compute-1 ceph-mon[81715]: pgmap v3375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:38 compute-1 ceph-mon[81715]: Health check update: 25 slow ops, oldest one blocked for 6168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:38 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:39.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:39.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:40 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:41.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:41.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:41 compute-1 ceph-mon[81715]: pgmap v3376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:41 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:42 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:42 compute-1 ceph-mon[81715]: pgmap v3377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:42 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:43.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:43.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:43 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:44 compute-1 podman[250781]: 2026-01-22 15:19:44.108435117 +0000 UTC m=+0.091211657 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:19:44 compute-1 ceph-mon[81715]: pgmap v3378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:44 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:45.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:45.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:45 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:47.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:47 compute-1 ceph-mon[81715]: pgmap v3379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:47 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:47 compute-1 ceph-mon[81715]: Health check update: 25 slow ops, oldest one blocked for 6178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:47.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:19:47.518 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:19:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:19:47.519 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:19:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:19:47.519 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:19:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:48 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:48 compute-1 ceph-mon[81715]: pgmap v3380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:49.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:49.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:49 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:49 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:51.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:51 compute-1 ceph-mon[81715]: pgmap v3381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:51 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:19:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:51.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:19:52 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:52 compute-1 ceph-mon[81715]: pgmap v3382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:53.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:53.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:53 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:53 compute-1 ceph-mon[81715]: Health check update: 25 slow ops, oldest one blocked for 6183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:53 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:55.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #211. Immutable memtables: 0.
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.223825) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 211
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195223858, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 1177, "num_deletes": 362, "total_data_size": 1969730, "memory_usage": 1995712, "flush_reason": "Manual Compaction"}
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #212: started
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195236366, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 212, "file_size": 1293702, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 102419, "largest_seqno": 103591, "table_properties": {"data_size": 1288682, "index_size": 2223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15155, "raw_average_key_size": 22, "raw_value_size": 1277050, "raw_average_value_size": 1864, "num_data_blocks": 95, "num_entries": 685, "num_filter_entries": 685, "num_deletions": 362, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095128, "oldest_key_time": 1769095128, "file_creation_time": 1769095195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 12635 microseconds, and 4234 cpu microseconds.
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.236450) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #212: 1293702 bytes OK
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.236483) [db/memtable_list.cc:519] [default] Level-0 commit table #212 started
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.237943) [db/memtable_list.cc:722] [default] Level-0 commit table #212: memtable #1 done
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.237971) EVENT_LOG_v1 {"time_micros": 1769095195237961, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.237996) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 1963421, prev total WAL file size 1963421, number of live WAL files 2.
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000208.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.239271) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [212(1263KB)], [210(11MB)]
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195239329, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [212], "files_L6": [210], "score": -1, "input_data_size": 13779194, "oldest_snapshot_seqno": -1}
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #213: 14117 keys, 12015866 bytes, temperature: kUnknown
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195326754, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 213, "file_size": 12015866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11938718, "index_size": 40747, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35333, "raw_key_size": 389333, "raw_average_key_size": 27, "raw_value_size": 11698197, "raw_average_value_size": 828, "num_data_blocks": 1475, "num_entries": 14117, "num_filter_entries": 14117, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 213, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.327070) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 12015866 bytes
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.328648) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.5 rd, 137.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.9 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(19.9) write-amplify(9.3) OK, records in: 14856, records dropped: 739 output_compression: NoCompression
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.328694) EVENT_LOG_v1 {"time_micros": 1769095195328683, "job": 136, "event": "compaction_finished", "compaction_time_micros": 87513, "compaction_time_cpu_micros": 57746, "output_level": 6, "num_output_files": 1, "total_output_size": 12015866, "num_input_records": 14856, "num_output_records": 14117, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195329152, "job": 136, "event": "table_file_deletion", "file_number": 212}
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000210.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195332292, "job": 136, "event": "table_file_deletion", "file_number": 210}
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.239168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.332362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.332367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.332368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.332369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:19:55 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.332371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:19:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:55.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:55 compute-1 ceph-mon[81715]: pgmap v3383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:55 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:56 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:56 compute-1 ceph-mon[81715]: pgmap v3384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:56 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:57.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:57.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:58 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:59 compute-1 podman[250808]: 2026-01-22 15:19:59.070800322 +0000 UTC m=+0.052898350 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:19:59 compute-1 ceph-mon[81715]: pgmap v3385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:59 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:59 compute-1 ceph-mon[81715]: Health check update: 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:59.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:19:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:59.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:00 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:20:00 compute-1 ceph-mon[81715]: Health detail: HEALTH_WARN 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops
Jan 22 15:20:00 compute-1 ceph-mon[81715]: [WRN] SLOW_OPS: 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops
Jan 22 15:20:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:01.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:01.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:01 compute-1 ceph-mon[81715]: pgmap v3386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:01 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:20:01 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:20:02 compute-1 ceph-mon[81715]: pgmap v3387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:02 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:20:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:03.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:03.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:03 compute-1 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:20:03 compute-1 ceph-mon[81715]: Health check update: 25 slow ops, oldest one blocked for 6193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:05 compute-1 ceph-mon[81715]: pgmap v3388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:05 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:05.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:05.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:06 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:20:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:07.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:07.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:07 compute-1 ceph-mon[81715]: pgmap v3389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:07 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:08 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:08 compute-1 ceph-mon[81715]: pgmap v3390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:08 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:08 compute-1 ceph-mon[81715]: Health check update: 72 slow ops, oldest one blocked for 6198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:09.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:20:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:09.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:10 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:10 compute-1 ceph-mon[81715]: pgmap v3391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:11.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:11.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:11 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:11 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:13.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:13.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:13 compute-1 ceph-mon[81715]: pgmap v3392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:13 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:13 compute-1 ceph-mon[81715]: Health check update: 72 slow ops, oldest one blocked for 6203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:15 compute-1 podman[250828]: 2026-01-22 15:20:15.08912143 +0000 UTC m=+0.075058814 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 15:20:15 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:15 compute-1 ceph-mon[81715]: pgmap v3393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:15 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:20:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:15.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:20:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:15.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:16 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:20:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:17.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:20:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:17.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:17 compute-1 ceph-mon[81715]: pgmap v3394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:17 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:17 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:19.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:19 compute-1 ceph-mon[81715]: pgmap v3395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:19 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:19 compute-1 ceph-mon[81715]: Health check update: 72 slow ops, oldest one blocked for 6208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/796340529' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:20:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/796340529' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:20:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:19.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:21.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:20:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:21.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:20:21 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:21 compute-1 ceph-mon[81715]: pgmap v3396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:21 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:23 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:23 compute-1 ceph-mon[81715]: pgmap v3397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:23 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:23.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:23.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:24 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:25.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:20:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:25.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:20:25 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:25 compute-1 ceph-mon[81715]: pgmap v3398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:26 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:26 compute-1 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:26 compute-1 ceph-mon[81715]: pgmap v3399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:27.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:27.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:27 compute-1 ceph-mon[81715]: Health check update: 72 slow ops, oldest one blocked for 6218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:27 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:29 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:29 compute-1 ceph-mon[81715]: pgmap v3400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:20:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:29.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:20:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:29.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:30 compute-1 podman[250854]: 2026-01-22 15:20:30.087141729 +0000 UTC m=+0.061055048 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 15:20:30 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:31.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:31.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:31 compute-1 sudo[250873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:31 compute-1 sudo[250873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:31 compute-1 sudo[250873]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:31 compute-1 sudo[250898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:20:31 compute-1 sudo[250898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:31 compute-1 sudo[250898]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:31 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:31 compute-1 ceph-mon[81715]: pgmap v3401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:31 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:31 compute-1 sudo[250923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:31 compute-1 sudo[250923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:31 compute-1 sudo[250923]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:31 compute-1 sudo[250948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:20:31 compute-1 sudo[250948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:32 compute-1 sudo[250948]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:32 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:32 compute-1 ceph-mon[81715]: pgmap v3402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:32 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:20:32 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:20:32 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 15:20:32 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 15:20:32 compute-1 ceph-mon[81715]: Health check update: 139 slow ops, oldest one blocked for 6223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:33.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:20:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:33.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:20:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:33 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:20:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:20:34 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:34 compute-1 ceph-mon[81715]: pgmap v3403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:20:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:20:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:20:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:20:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:20:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:20:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:35.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:20:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:35.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:36 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:37 compute-1 ceph-mon[81715]: pgmap v3404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:37 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:37.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:20:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:37.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:20:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:38 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:39.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:39.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:39 compute-1 ceph-mon[81715]: pgmap v3405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:39 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:39 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:39 compute-1 ceph-mon[81715]: Health check update: 139 slow ops, oldest one blocked for 6228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:41.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:41.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:41 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:41 compute-1 ceph-mon[81715]: pgmap v3406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:20:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:43.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:20:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:43.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:20:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:45.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:20:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:45.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:20:46 compute-1 podman[251006]: 2026-01-22 15:20:46.091686618 +0000 UTC m=+0.081245189 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 15:20:46 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:46 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:46 compute-1 ceph-mon[81715]: pgmap v3407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:47.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:47.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:20:47.519 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:20:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:20:47.519 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:20:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:20:47.519 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:20:47 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:47 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:47 compute-1 ceph-mon[81715]: pgmap v3408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:47 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:47 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:47 compute-1 ceph-mon[81715]: pgmap v3409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:47 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:47 compute-1 ceph-mon[81715]: Health check update: 139 slow ops, oldest one blocked for 6233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:49.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:49.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:49 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:49 compute-1 ceph-mon[81715]: pgmap v3410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:51 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:51 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:51 compute-1 ceph-mon[81715]: pgmap v3411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:20:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:51.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:51.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:52 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:52 compute-1 ceph-mon[81715]: pgmap v3412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:52 compute-1 ceph-mon[81715]: Health check update: 139 slow ops, oldest one blocked for 6238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:53.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:53.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:54 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:54 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:55 compute-1 sudo[251032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:55 compute-1 sudo[251032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:55 compute-1 sudo[251032]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:55 compute-1 sudo[251057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:20:55 compute-1 sudo[251057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:55 compute-1 sudo[251057]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:55.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:55 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:55 compute-1 ceph-mon[81715]: pgmap v3413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:20:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:20:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:20:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:55.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:20:56 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:56 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:56 compute-1 ceph-mon[81715]: pgmap v3414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:57.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:57.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:58 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:58 compute-1 ceph-mon[81715]: Health check update: 139 slow ops, oldest one blocked for 6243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:59.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:20:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:20:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:59.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:59 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:20:59 compute-1 ceph-mon[81715]: pgmap v3415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:59 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:01 compute-1 podman[251082]: 2026-01-22 15:21:01.072302952 +0000 UTC m=+0.055499450 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 15:21:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:01.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:01 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:01 compute-1 ceph-mon[81715]: pgmap v3416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:01.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:02 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:02 compute-1 ceph-mon[81715]: pgmap v3417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:02 compute-1 ceph-mon[81715]: Health check update: 139 slow ops, oldest one blocked for 6247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:03.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:03.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:03 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:03 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:04 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:04 compute-1 ceph-mon[81715]: pgmap v3418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:05.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:05.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:05 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:06 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:06 compute-1 ceph-mon[81715]: pgmap v3419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:07.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:07.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:07 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:07 compute-1 ceph-mon[81715]: Health check update: 76 slow ops, oldest one blocked for 6258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:08 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:08 compute-1 ceph-mon[81715]: pgmap v3420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:09.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:09.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:10 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:11 compute-1 ceph-mon[81715]: pgmap v3421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:11 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:11.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:11.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:11 compute-1 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 15:21:12 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:13.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:13.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:13 compute-1 ceph-mon[81715]: pgmap v3422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:13 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:13 compute-1 ceph-mon[81715]: Health check update: 76 slow ops, oldest one blocked for 6263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:13 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:13 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:14 compute-1 ceph-mon[81715]: pgmap v3423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:14 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:15.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:15.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:16 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:17 compute-1 podman[251102]: 2026-01-22 15:21:17.137600083 +0000 UTC m=+0.122051564 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 15:21:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:17.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:17 compute-1 ceph-mon[81715]: pgmap v3424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:17 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:17 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:17.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:18 compute-1 ceph-mon[81715]: Health check update: 76 slow ops, oldest one blocked for 6268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:18 compute-1 ceph-mon[81715]: pgmap v3425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:18 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:19.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:19.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4178158463' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:21:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4178158463' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:21:19 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:20 compute-1 ceph-mon[81715]: pgmap v3426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:20 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:21.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:21.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:21 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:22 compute-1 ceph-mon[81715]: pgmap v3427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:22 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:22 compute-1 ceph-mon[81715]: Health check update: 76 slow ops, oldest one blocked for 6273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:23.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:23.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:23 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:24 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:25.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:25 compute-1 ceph-mon[81715]: pgmap v3428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:25 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:25.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:27 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:27 compute-1 ceph-mon[81715]: pgmap v3429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:27 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:27.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:27.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:28 compute-1 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:28 compute-1 ceph-mon[81715]: Health check update: 76 slow ops, oldest one blocked for 6278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:28 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:29.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:29 compute-1 ceph-mon[81715]: pgmap v3430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:29 compute-1 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:29.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:30 compute-1 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:30 compute-1 ceph-mon[81715]: pgmap v3431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:30 compute-1 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:31.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:31.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:32 compute-1 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:32 compute-1 podman[251130]: 2026-01-22 15:21:32.075133963 +0000 UTC m=+0.069925697 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:21:33 compute-1 ceph-mon[81715]: pgmap v3432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:33 compute-1 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:33 compute-1 ceph-mon[81715]: Health check update: 140 slow ops, oldest one blocked for 6283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:33.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:33.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:34 compute-1 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:35 compute-1 ceph-mon[81715]: pgmap v3433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:35 compute-1 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:35.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:35.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:36 compute-1 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:37 compute-1 ceph-mon[81715]: pgmap v3434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:37 compute-1 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:37.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:37.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:38 compute-1 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:38 compute-1 ceph-mon[81715]: Health check update: 140 slow ops, oldest one blocked for 6288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:38 compute-1 ceph-mon[81715]: pgmap v3435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:39.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:39.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:39 compute-1 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:40 compute-1 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:40 compute-1 ceph-mon[81715]: pgmap v3436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:40 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:41.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:41.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:41 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:43 compute-1 ceph-mon[81715]: pgmap v3437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:43 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:43 compute-1 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 6293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:43.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:43.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:44 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:45 compute-1 ceph-mon[81715]: pgmap v3438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:45 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:45.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:45.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:46 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:46 compute-1 ceph-mon[81715]: pgmap v3439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:47.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:21:47.521 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:21:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:21:47.521 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:21:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:21:47.521 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:21:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:47.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:47 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:47 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:47 compute-1 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 6298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:48 compute-1 podman[251150]: 2026-01-22 15:21:48.134431825 +0000 UTC m=+0.130256919 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 15:21:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:48 compute-1 ceph-mon[81715]: pgmap v3440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:48 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:49.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:49.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:49 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:50 compute-1 ceph-mon[81715]: pgmap v3441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:50 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:51.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:51.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:52 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:53 compute-1 ceph-mon[81715]: pgmap v3442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:53 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:53 compute-1 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 6303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:53.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:53.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:54 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:55 compute-1 ceph-mon[81715]: pgmap v3443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:55 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:55.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:55 compute-1 sudo[251177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:21:55 compute-1 sudo[251177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:21:55 compute-1 sudo[251177]: pam_unix(sudo:session): session closed for user root
Jan 22 15:21:55 compute-1 sudo[251202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:21:55 compute-1 sudo[251202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:21:55 compute-1 sudo[251202]: pam_unix(sudo:session): session closed for user root
Jan 22 15:21:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:55.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:55 compute-1 sudo[251227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:21:55 compute-1 sudo[251227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:21:55 compute-1 sudo[251227]: pam_unix(sudo:session): session closed for user root
Jan 22 15:21:55 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:21:55.603 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:21:55 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:21:55.604 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:21:55 compute-1 sudo[251252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:21:55 compute-1 sudo[251252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:21:56 compute-1 sudo[251252]: pam_unix(sudo:session): session closed for user root
Jan 22 15:21:56 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:56 compute-1 ceph-mon[81715]: pgmap v3444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:56 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:21:56 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:21:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:57.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:57.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:57 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:21:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:21:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:21:57 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:21:57 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:58 compute-1 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 6308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:58 compute-1 ceph-mon[81715]: pgmap v3445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:58 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:58 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:59.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:21:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:59.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:00 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:22:00 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:22:00.606 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:22:01 compute-1 ceph-mon[81715]: pgmap v3446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:01 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:22:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:01.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:01.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:02 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:22:02 compute-1 ceph-mon[81715]: pgmap v3447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:03 compute-1 podman[251307]: 2026-01-22 15:22:03.051589025 +0000 UTC m=+0.047270762 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 22 15:22:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:03.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:03 compute-1 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:22:03 compute-1 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 6313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:22:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:03.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:22:03 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:04 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:04 compute-1 ceph-mon[81715]: pgmap v3448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:04 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:05.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:05.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:06 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:06 compute-1 ceph-mon[81715]: pgmap v3449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:07 compute-1 sudo[251327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:22:07 compute-1 sudo[251327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:22:07 compute-1 sudo[251327]: pam_unix(sudo:session): session closed for user root
Jan 22 15:22:07 compute-1 sudo[251352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:22:07 compute-1 sudo[251352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:22:07 compute-1 sudo[251352]: pam_unix(sudo:session): session closed for user root
Jan 22 15:22:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:07.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:07 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:07 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:22:07 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:22:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:07.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:08 compute-1 ceph-mon[81715]: Health check update: 26 slow ops, oldest one blocked for 6318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:08 compute-1 ceph-mon[81715]: pgmap v3450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:08 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:08 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:09.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:09.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:09 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:11 compute-1 ceph-mon[81715]: pgmap v3451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:11 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:11.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:11.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:12 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:13.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:13.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:14 compute-1 ceph-mon[81715]: pgmap v3452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:14 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:14 compute-1 ceph-mon[81715]: Health check update: 26 slow ops, oldest one blocked for 6323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:14 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:14 compute-1 ceph-mon[81715]: pgmap v3453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:14 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 22 15:22:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:15.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 22 15:22:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:15.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:16 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:22:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:17.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:22:17 compute-1 ceph-mon[81715]: pgmap v3454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:17 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:17.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:22:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3481393543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:22:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:22:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3481393543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:22:18 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:18 compute-1 ceph-mon[81715]: pgmap v3455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:18 compute-1 ceph-mon[81715]: Health check update: 26 slow ops, oldest one blocked for 6328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:18 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:19 compute-1 podman[251378]: 2026-01-22 15:22:19.088092152 +0000 UTC m=+0.077813329 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 22 15:22:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:19.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:19.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3481393543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:22:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3481393543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:22:19 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:21 compute-1 ceph-mon[81715]: pgmap v3456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:21 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:21.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:21.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:22 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:23.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:23 compute-1 ceph-mon[81715]: pgmap v3457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:23 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:23 compute-1 ceph-mon[81715]: Health check update: 26 slow ops, oldest one blocked for 6333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:22:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:23.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:22:23 compute-1 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 22 15:22:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:24 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:24 compute-1 ceph-mon[81715]: pgmap v3458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:24 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:25.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:25.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:25 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:27 compute-1 ceph-mon[81715]: pgmap v3459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 11 op/s
Jan 22 15:22:27 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:27.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:27.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:28 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:28 compute-1 ceph-mon[81715]: Health check update: 26 slow ops, oldest one blocked for 6338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:29.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:29.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:29 compute-1 ceph-mon[81715]: pgmap v3460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 11 op/s
Jan 22 15:22:29 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:29 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:31.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:31 compute-1 ceph-mon[81715]: pgmap v3461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 852 B/s wr, 12 op/s
Jan 22 15:22:31 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:31.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:32 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:32 compute-1 ceph-mon[81715]: pgmap v3462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 852 B/s wr, 21 op/s
Jan 22 15:22:32 compute-1 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:33.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:33 compute-1 ceph-mon[81715]: Health check update: 26 slow ops, oldest one blocked for 6343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:33 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:22:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:33.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:22:34 compute-1 podman[251405]: 2026-01-22 15:22:34.053655703 +0000 UTC m=+0.048577547 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 22 15:22:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:34 compute-1 ceph-mon[81715]: pgmap v3463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 852 B/s wr, 21 op/s
Jan 22 15:22:34 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:35.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:35.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:35 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:36 compute-1 ceph-mon[81715]: pgmap v3464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 852 B/s wr, 21 op/s
Jan 22 15:22:36 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:37.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:37.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:38 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:38 compute-1 ceph-mon[81715]: Health check update: 27 slow ops, oldest one blocked for 6348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:39.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:39 compute-1 ceph-mon[81715]: pgmap v3465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 15:22:39 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:39 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:39.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:40 compute-1 ceph-mon[81715]: pgmap v3466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 15:22:40 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:41.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:41.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:41 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:42 compute-1 ceph-mon[81715]: pgmap v3467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 22 15:22:42 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:42 compute-1 ceph-mon[81715]: Health check update: 27 slow ops, oldest one blocked for 6353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:43.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:43.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:43 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:44 compute-1 sshd[165237]: Timeout before authentication for connection from 106.13.27.219 to 38.102.83.119, pid = 251004
Jan 22 15:22:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:44 compute-1 ceph-mon[81715]: pgmap v3468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:44 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:45.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:45.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:45 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:46 compute-1 ceph-mon[81715]: pgmap v3469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:46 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:47.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:22:47.521 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:22:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:22:47.522 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:22:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:22:47.522 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:22:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:47.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:47 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:47 compute-1 ceph-mon[81715]: Health check update: 27 slow ops, oldest one blocked for 6358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:48 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:48 compute-1 ceph-mon[81715]: pgmap v3470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:49.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:49.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:49 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:50 compute-1 podman[251424]: 2026-01-22 15:22:50.087956453 +0000 UTC m=+0.073518443 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 15:22:51 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:51 compute-1 ceph-mon[81715]: pgmap v3471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:51.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:51.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:52 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:53 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:53 compute-1 ceph-mon[81715]: pgmap v3472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:53 compute-1 ceph-mon[81715]: Health check update: 27 slow ops, oldest one blocked for 6363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:53.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:53.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:54 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:55 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:55 compute-1 ceph-mon[81715]: pgmap v3473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:55.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:55.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:56 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:57 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:57 compute-1 ceph-mon[81715]: pgmap v3474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:57.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:57.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:58 compute-1 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:58 compute-1 ceph-mon[81715]: Health check update: 27 slow ops, oldest one blocked for 6368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:59.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:59 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 15:22:59 compute-1 ceph-mon[81715]: pgmap v3475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:22:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:59.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #214. Immutable memtables: 0.
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.598544) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 214
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380598846, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 2802, "num_deletes": 569, "total_data_size": 5294219, "memory_usage": 5379024, "flush_reason": "Manual Compaction"}
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #215: started
Jan 22 15:23:00 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 15:23:00 compute-1 ceph-mon[81715]: pgmap v3476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:00 compute-1 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380622698, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 215, "file_size": 3454007, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 103596, "largest_seqno": 106393, "table_properties": {"data_size": 3443431, "index_size": 5853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3653, "raw_key_size": 33366, "raw_average_key_size": 23, "raw_value_size": 3417977, "raw_average_value_size": 2380, "num_data_blocks": 250, "num_entries": 1436, "num_filter_entries": 1436, "num_deletions": 569, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095195, "oldest_key_time": 1769095195, "file_creation_time": 1769095380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 23959 microseconds, and 8128 cpu microseconds.
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.622749) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #215: 3454007 bytes OK
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.622770) [db/memtable_list.cc:519] [default] Level-0 commit table #215 started
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.625195) [db/memtable_list.cc:722] [default] Level-0 commit table #215: memtable #1 done
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.625212) EVENT_LOG_v1 {"time_micros": 1769095380625206, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.625230) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 5280288, prev total WAL file size 5280288, number of live WAL files 2.
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000211.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.626613) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [215(3373KB)], [213(11MB)]
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380626783, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [215], "files_L6": [213], "score": -1, "input_data_size": 15469873, "oldest_snapshot_seqno": -1}
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #216: 14400 keys, 13609847 bytes, temperature: kUnknown
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380738593, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 216, "file_size": 13609847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13529146, "index_size": 43596, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36037, "raw_key_size": 394685, "raw_average_key_size": 27, "raw_value_size": 13282098, "raw_average_value_size": 922, "num_data_blocks": 1597, "num_entries": 14400, "num_filter_entries": 14400, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 216, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.738915) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 13609847 bytes
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.740315) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.2 rd, 121.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 11.5 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 15553, records dropped: 1153 output_compression: NoCompression
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.740335) EVENT_LOG_v1 {"time_micros": 1769095380740326, "job": 138, "event": "compaction_finished", "compaction_time_micros": 111956, "compaction_time_cpu_micros": 64747, "output_level": 6, "num_output_files": 1, "total_output_size": 13609847, "num_input_records": 15553, "num_output_records": 14400, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380741339, "job": 138, "event": "table_file_deletion", "file_number": 215}
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000213.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380743711, "job": 138, "event": "table_file_deletion", "file_number": 213}
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.626461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.743972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.743978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.743981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.743984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:00 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.743987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:01 compute-1 sshd[165237]: drop connection #0 from [106.13.27.219]:41136 on [38.102.83.119]:22 penalty: exceeded LoginGraceTime
Jan 22 15:23:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:01.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:01 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:01.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:02 compute-1 sshd[165237]: drop connection #0 from [106.13.27.219]:55868 on [38.102.83.119]:22 penalty: exceeded LoginGraceTime
Jan 22 15:23:02 compute-1 ceph-mon[81715]: pgmap v3477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:02 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:03 compute-1 sshd[165237]: drop connection #0 from [106.13.27.219]:55878 on [38.102.83.119]:22 penalty: exceeded LoginGraceTime
Jan 22 15:23:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:03.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:03.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:04 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:05 compute-1 podman[251451]: 2026-01-22 15:23:05.05256525 +0000 UTC m=+0.047445487 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:23:05 compute-1 ceph-mon[81715]: pgmap v3478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:05 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:05.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:05.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:06 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:07 compute-1 ceph-mon[81715]: pgmap v3479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:07 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:07 compute-1 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:07 compute-1 sudo[251470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:23:07 compute-1 sudo[251470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:07 compute-1 sudo[251470]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:07.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:07 compute-1 sudo[251495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:23:07 compute-1 sudo[251495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:07 compute-1 sudo[251495]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:07 compute-1 sudo[251520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:23:07 compute-1 sudo[251520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:07 compute-1 sudo[251520]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:07 compute-1 sudo[251545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:23:07 compute-1 sudo[251545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:07.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:07 compute-1 sudo[251545]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:08 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:08 compute-1 ceph-mon[81715]: pgmap v3480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:08 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:23:08 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:23:08 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:23:08 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:23:08 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:23:08 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:23:09 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:09.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:09.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:10 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:10 compute-1 ceph-mon[81715]: pgmap v3481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:11 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:11 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:11.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:11.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:12 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:12 compute-1 ceph-mon[81715]: pgmap v3482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:13.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:13 compute-1 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:13 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:13.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:14 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:14 compute-1 ceph-mon[81715]: pgmap v3483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:15 compute-1 sudo[251601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:23:15 compute-1 sudo[251601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:15 compute-1 sudo[251601]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:15 compute-1 sudo[251626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:23:15 compute-1 sudo[251626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:15 compute-1 sudo[251626]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:15.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:15.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:16 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:23:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:23:17 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:17 compute-1 ceph-mon[81715]: pgmap v3484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:17.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:17.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #217. Immutable memtables: 0.
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.730237) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 139] Flushing memtable with next log file: 217
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397730286, "job": 139, "event": "flush_started", "num_memtables": 1, "num_entries": 512, "num_deletes": 278, "total_data_size": 528974, "memory_usage": 538224, "flush_reason": "Manual Compaction"}
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 139] Level-0 flush table #218: started
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397734104, "cf_name": "default", "job": 139, "event": "table_file_creation", "file_number": 218, "file_size": 315797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 106398, "largest_seqno": 106905, "table_properties": {"data_size": 313176, "index_size": 592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 8080, "raw_average_key_size": 21, "raw_value_size": 307427, "raw_average_value_size": 819, "num_data_blocks": 25, "num_entries": 375, "num_filter_entries": 375, "num_deletions": 278, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095381, "oldest_key_time": 1769095381, "file_creation_time": 1769095397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 139] Flush lasted 3886 microseconds, and 1850 cpu microseconds.
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.734133) [db/flush_job.cc:967] [default] [JOB 139] Level-0 flush table #218: 315797 bytes OK
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.734146) [db/memtable_list.cc:519] [default] Level-0 commit table #218 started
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735178) [db/memtable_list.cc:722] [default] Level-0 commit table #218: memtable #1 done
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735188) EVENT_LOG_v1 {"time_micros": 1769095397735184, "job": 139, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735202) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 139] Try to delete WAL files size 525772, prev total WAL file size 525772, number of live WAL files 2.
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000214.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735508) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303037' seq:72057594037927935, type:22 .. '6D6772737461740033323539' seq:0, type:0; will stop at (end)
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 140] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 139 Base level 0, inputs: [218(308KB)], [216(12MB)]
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397735536, "job": 140, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [218], "files_L6": [216], "score": -1, "input_data_size": 13925644, "oldest_snapshot_seqno": -1}
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 140] Generated table #219: 14209 keys, 10036727 bytes, temperature: kUnknown
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397797432, "cf_name": "default", "job": 140, "event": "table_file_creation", "file_number": 219, "file_size": 10036727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9961922, "index_size": 38148, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35589, "raw_key_size": 390795, "raw_average_key_size": 27, "raw_value_size": 9722924, "raw_average_value_size": 684, "num_data_blocks": 1369, "num_entries": 14209, "num_filter_entries": 14209, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 219, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.797764) [db/compaction/compaction_job.cc:1663] [default] [JOB 140] Compacted 1@0 + 1@6 files to L6 => 10036727 bytes
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.799110) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 224.7 rd, 161.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(75.9) write-amplify(31.8) OK, records in: 14775, records dropped: 566 output_compression: NoCompression
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.799162) EVENT_LOG_v1 {"time_micros": 1769095397799121, "job": 140, "event": "compaction_finished", "compaction_time_micros": 61976, "compaction_time_cpu_micros": 26962, "output_level": 6, "num_output_files": 1, "total_output_size": 10036727, "num_input_records": 14775, "num_output_records": 14209, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397799370, "job": 140, "event": "table_file_deletion", "file_number": 218}
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000216.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397802155, "job": 140, "event": "table_file_deletion", "file_number": 216}
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.802412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.802420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.802423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.802426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:17 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.802428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:18 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:18 compute-1 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:23:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2891920492' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:23:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:23:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2891920492' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:23:19 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:19 compute-1 ceph-mon[81715]: pgmap v3485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2891920492' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:23:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2891920492' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:23:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:19.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:19.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:20 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:21 compute-1 podman[251651]: 2026-01-22 15:23:21.143140104 +0000 UTC m=+0.118165113 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 15:23:21 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:21 compute-1 ceph-mon[81715]: pgmap v3486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:21.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:21.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:22 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:22 compute-1 ceph-mon[81715]: pgmap v3487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:23 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:23 compute-1 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:23.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:23.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:24 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:24 compute-1 ceph-mon[81715]: pgmap v3488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:25 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:25.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:25.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:26 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:26 compute-1 ceph-mon[81715]: pgmap v3489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:27.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:27.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:27 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:27 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:27 compute-1 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:29 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:29 compute-1 ceph-mon[81715]: pgmap v3490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:29.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:29.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:30 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:31 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:31 compute-1 ceph-mon[81715]: pgmap v3491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:31.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:31.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:32 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:32 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:32 compute-1 ceph-mon[81715]: pgmap v3492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:33.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:33.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:33 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:33 compute-1 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:35 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:35 compute-1 ceph-mon[81715]: pgmap v3493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:35.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:35.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:36 compute-1 podman[251679]: 2026-01-22 15:23:36.055630411 +0000 UTC m=+0.048265089 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 15:23:36 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:37.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:37 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:37 compute-1 ceph-mon[81715]: pgmap v3494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:37 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:37.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:38 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:38 compute-1 ceph-mon[81715]: pgmap v3495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:38 compute-1 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:39.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:39.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:40 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:40 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:40 compute-1 ceph-mon[81715]: pgmap v3496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:41.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:41.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:41 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:43.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:43.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:43 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:43 compute-1 ceph-mon[81715]: pgmap v3497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:43 compute-1 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:44 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:44 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:44 compute-1 ceph-mon[81715]: pgmap v3498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:45.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:45.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:45 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:46 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:46 compute-1 ceph-mon[81715]: pgmap v3499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:47.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:23:47.522 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:23:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:23:47.522 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:23:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:23:47.523 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:23:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:47.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:47 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:47 compute-1 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6417 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:48 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:48 compute-1 ceph-mon[81715]: pgmap v3500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:49.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:49.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:50 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:51 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:51 compute-1 ceph-mon[81715]: pgmap v3501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:51.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:51.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:52 compute-1 podman[251698]: 2026-01-22 15:23:52.085716923 +0000 UTC m=+0.077643405 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:23:52 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:53 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:53 compute-1 ceph-mon[81715]: pgmap v3502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:53 compute-1 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:53.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:53.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:54 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:55 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:55 compute-1 ceph-mon[81715]: pgmap v3503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:55.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:55.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:56 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:57 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:57 compute-1 ceph-mon[81715]: pgmap v3504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:57.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:57.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #220. Immutable memtables: 0.
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.776176) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 141] Flushing memtable with next log file: 220
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437776269, "job": 141, "event": "flush_started", "num_memtables": 1, "num_entries": 812, "num_deletes": 325, "total_data_size": 1094797, "memory_usage": 1111488, "flush_reason": "Manual Compaction"}
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 141] Level-0 flush table #221: started
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437782162, "cf_name": "default", "job": 141, "event": "table_file_creation", "file_number": 221, "file_size": 717997, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 106910, "largest_seqno": 107717, "table_properties": {"data_size": 714385, "index_size": 1199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10658, "raw_average_key_size": 20, "raw_value_size": 706207, "raw_average_value_size": 1368, "num_data_blocks": 53, "num_entries": 516, "num_filter_entries": 516, "num_deletions": 325, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095398, "oldest_key_time": 1769095398, "file_creation_time": 1769095437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 221, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 141] Flush lasted 5982 microseconds, and 2558 cpu microseconds.
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.782205) [db/flush_job.cc:967] [default] [JOB 141] Level-0 flush table #221: 717997 bytes OK
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.782223) [db/memtable_list.cc:519] [default] Level-0 commit table #221 started
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.783376) [db/memtable_list.cc:722] [default] Level-0 commit table #221: memtable #1 done
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.783390) EVENT_LOG_v1 {"time_micros": 1769095437783385, "job": 141, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.783407) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 141] Try to delete WAL files size 1090180, prev total WAL file size 1090180, number of live WAL files 2.
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000217.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.783925) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035303334' seq:72057594037927935, type:22 .. '6C6F676D0035323837' seq:0, type:0; will stop at (end)
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 142] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 141 Base level 0, inputs: [221(701KB)], [219(9801KB)]
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437783973, "job": 142, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [221], "files_L6": [219], "score": -1, "input_data_size": 10754724, "oldest_snapshot_seqno": -1}
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 142] Generated table #222: 14066 keys, 10584317 bytes, temperature: kUnknown
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437842857, "cf_name": "default", "job": 142, "event": "table_file_creation", "file_number": 222, "file_size": 10584317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10509560, "index_size": 38484, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35205, "raw_key_size": 388480, "raw_average_key_size": 27, "raw_value_size": 10272136, "raw_average_value_size": 730, "num_data_blocks": 1380, "num_entries": 14066, "num_filter_entries": 14066, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 222, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.843163) [db/compaction/compaction_job.cc:1663] [default] [JOB 142] Compacted 1@0 + 1@6 files to L6 => 10584317 bytes
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.845085) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.3 rd, 179.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.6 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(29.7) write-amplify(14.7) OK, records in: 14725, records dropped: 659 output_compression: NoCompression
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.845104) EVENT_LOG_v1 {"time_micros": 1769095437845094, "job": 142, "event": "compaction_finished", "compaction_time_micros": 58979, "compaction_time_cpu_micros": 25424, "output_level": 6, "num_output_files": 1, "total_output_size": 10584317, "num_input_records": 14725, "num_output_records": 14066, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000221.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437845340, "job": 142, "event": "table_file_deletion", "file_number": 221}
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000219.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437846993, "job": 142, "event": "table_file_deletion", "file_number": 219}
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.783845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.847026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.847030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.847032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.847034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:57 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.847036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:58 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:58 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:58 compute-1 ceph-mon[81715]: pgmap v3505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:58 compute-1 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:59.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:23:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:23:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:59.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:23:59 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:01 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:01 compute-1 ceph-mon[81715]: pgmap v3506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:01.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:01.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:02 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:02 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:02 compute-1 ceph-mon[81715]: pgmap v3507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:03.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:03.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:04 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:04 compute-1 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:05 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:05 compute-1 ceph-mon[81715]: pgmap v3508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:05.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:05.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:06 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:07 compute-1 podman[251724]: 2026-01-22 15:24:07.060640075 +0000 UTC m=+0.048862453 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 15:24:07 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:07 compute-1 ceph-mon[81715]: pgmap v3509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:07.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:07.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:08 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:08 compute-1 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:09 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:09 compute-1 ceph-mon[81715]: pgmap v3510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:09.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:09.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:10 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:11 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:11 compute-1 ceph-mon[81715]: pgmap v3511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 683 KiB/s rd, 0 op/s
Jan 22 15:24:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:11.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:11.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:12 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:13.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:13.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:13 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:13 compute-1 ceph-mon[81715]: pgmap v3512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Jan 22 15:24:13 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:13 compute-1 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:15 compute-1 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:15 compute-1 ceph-mon[81715]: pgmap v3513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 855 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 586 KiB/s wr, 19 op/s
Jan 22 15:24:15 compute-1 sudo[251743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:24:15 compute-1 sudo[251743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:15 compute-1 sudo[251743]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:15 compute-1 sudo[251768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:24:15 compute-1 sudo[251768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:15 compute-1 sudo[251768]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:15 compute-1 sudo[251793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:24:15 compute-1 sudo[251793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:15 compute-1 sudo[251793]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:15.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:15 compute-1 sudo[251818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:24:15 compute-1 sudo[251818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:15.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:16 compute-1 sudo[251818]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:16 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:17 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:17 compute-1 ceph-mon[81715]: pgmap v3514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 36 op/s
Jan 22 15:24:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 15:24:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:24:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:24:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:24:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:24:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:24:17 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:24:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:17.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:17.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:18 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:18 compute-1 ceph-mon[81715]: Health check update: 71 slow ops, oldest one blocked for 6448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:24:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1614632194' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:24:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:24:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1614632194' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:24:19 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:19 compute-1 ceph-mon[81715]: pgmap v3515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 36 op/s
Jan 22 15:24:19 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1614632194' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:24:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1614632194' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:24:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:19.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:19.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:20 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:20 compute-1 ceph-mon[81715]: pgmap v3516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 36 op/s
Jan 22 15:24:21 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:21.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:21.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:22 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:22 compute-1 ceph-mon[81715]: pgmap v3517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 36 op/s
Jan 22 15:24:23 compute-1 sudo[251875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:24:23 compute-1 sudo[251875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:23 compute-1 sudo[251875]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:23 compute-1 sudo[251914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:24:23 compute-1 sudo[251914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:23 compute-1 sudo[251914]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:23 compute-1 podman[251874]: 2026-01-22 15:24:23.116247249 +0000 UTC m=+0.105994459 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 15:24:23 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:23 compute-1 ceph-mon[81715]: Health check update: 71 slow ops, oldest one blocked for 6453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:23 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:24:23 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:24:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:23.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:23.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:24 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:24 compute-1 ceph-mon[81715]: pgmap v3518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 MiB/s wr, 29 op/s
Jan 22 15:24:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:25.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:25 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:25.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:26 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:26 compute-1 ceph-mon[81715]: pgmap v3519: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 845 KiB/s wr, 17 op/s
Jan 22 15:24:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:27.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:27.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:27 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:27 compute-1 ceph-mon[81715]: Health check update: 71 slow ops, oldest one blocked for 6458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:29 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:29 compute-1 ceph-mon[81715]: pgmap v3520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:29.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:29.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:30 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:31 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:31 compute-1 ceph-mon[81715]: pgmap v3521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:31.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:31.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:32 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:33.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:33.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:33 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:33 compute-1 ceph-mon[81715]: pgmap v3522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:33 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:24:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.5 total, 600.0 interval
                                           Cumulative writes: 16K writes, 48K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 16K writes, 5677 syncs, 2.87 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 984 writes, 1511 keys, 984 commit groups, 1.0 writes per commit group, ingest: 0.56 MB, 0.00 MB/s
                                           Interval WAL: 984 writes, 466 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 15:24:35 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:35 compute-1 ceph-mon[81715]: pgmap v3523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:35.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:35.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:36 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:37.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:37 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:37 compute-1 ceph-mon[81715]: pgmap v3524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:37 compute-1 ceph-mon[81715]: Health check update: 71 slow ops, oldest one blocked for 6467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:37.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:38 compute-1 podman[251951]: 2026-01-22 15:24:38.058517199 +0000 UTC m=+0.046509190 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:24:38 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:38 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:38 compute-1 ceph-mon[81715]: pgmap v3525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:39.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:39.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:39 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:41 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:41 compute-1 ceph-mon[81715]: pgmap v3526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:41.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:41.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:42 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:43 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:43 compute-1 ceph-mon[81715]: pgmap v3527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:43 compute-1 ceph-mon[81715]: Health check update: 71 slow ops, oldest one blocked for 6473 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:43.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:43.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:44 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:45 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:45 compute-1 ceph-mon[81715]: pgmap v3528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:45.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:45.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:46 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:46 compute-1 ceph-mon[81715]: pgmap v3529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:47 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:47 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:24:47.524 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:24:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:24:47.524 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:24:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:24:47.525 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:24:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:47.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:47.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:48 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:48 compute-1 ceph-mon[81715]: pgmap v3530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:48 compute-1 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 6478 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:49.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:49 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:49.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:50 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:50 compute-1 ceph-mon[81715]: pgmap v3531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:51.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:51.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:51 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:53 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:53 compute-1 ceph-mon[81715]: pgmap v3532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:53 compute-1 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 6483 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:53.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:53.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:54 compute-1 podman[251971]: 2026-01-22 15:24:54.141608355 +0000 UTC m=+0.130038258 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 15:24:54 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:55 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:55 compute-1 ceph-mon[81715]: pgmap v3533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:55 compute-1 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:55.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:55.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:56 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:24:56 compute-1 ceph-mon[81715]: pgmap v3534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:57.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:57 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:24:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:57.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:58 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:24:58 compute-1 ceph-mon[81715]: pgmap v3535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:58 compute-1 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 6488 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:59.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:59 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:24:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:24:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:59.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:00 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:00 compute-1 ceph-mon[81715]: pgmap v3536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:01.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:01.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:01 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:03 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:03 compute-1 ceph-mon[81715]: pgmap v3537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:03 compute-1 ceph-mon[81715]: Health check update: 108 slow ops, oldest one blocked for 6493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:03.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:03.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:04 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:05 compute-1 ceph-mon[81715]: pgmap v3538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:05 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:05.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:05.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:06 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:07 compute-1 ceph-mon[81715]: pgmap v3539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:07 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:07 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:07.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:07.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:08 compute-1 ceph-mon[81715]: pgmap v3540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:08 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:08 compute-1 ceph-mon[81715]: Health check update: 108 slow ops, oldest one blocked for 6498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:09 compute-1 podman[251998]: 2026-01-22 15:25:09.054390428 +0000 UTC m=+0.047027363 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:25:09 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:09.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:09.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:10 compute-1 ceph-mon[81715]: pgmap v3541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:10 compute-1 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:11.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:11.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:12 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:13 compute-1 ceph-mon[81715]: pgmap v3542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:13 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:13 compute-1 ceph-mon[81715]: Health check update: 108 slow ops, oldest one blocked for 6503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:13.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:13.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:14 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:15 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:15 compute-1 ceph-mon[81715]: pgmap v3543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:15 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:15.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:15.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:16 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:16 compute-1 ceph-mon[81715]: pgmap v3544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:17 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:17.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:17.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:18 compute-1 ceph-mon[81715]: pgmap v3545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:18 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:18 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2657378040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:25:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2657378040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:25:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:19 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:19.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:19.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:20 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:20 compute-1 ceph-mon[81715]: pgmap v3546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:21.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:21.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:22 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:22 compute-1 ceph-mon[81715]: pgmap v3547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:23 compute-1 sudo[252017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:25:23 compute-1 sudo[252017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:23 compute-1 sudo[252017]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:23 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:23 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:23 compute-1 sudo[252042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:25:23 compute-1 sudo[252042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:23 compute-1 sudo[252042]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:23 compute-1 sudo[252067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:25:23 compute-1 sudo[252067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:23 compute-1 sudo[252067]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:23 compute-1 sudo[252092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:25:23 compute-1 sudo[252092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:23.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:23.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:23 compute-1 sudo[252092]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:24 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:24 compute-1 ceph-mon[81715]: pgmap v3548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:25:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:25:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:25:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:25:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:25:24 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:25:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:25 compute-1 podman[252149]: 2026-01-22 15:25:25.081418008 +0000 UTC m=+0.075200565 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, managed_by=edpm_ansible)
Jan 22 15:25:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:25.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:25 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:25 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:25.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:26 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:26 compute-1 ceph-mon[81715]: pgmap v3549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:27.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:27 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:27.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:28 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:28 compute-1 ceph-mon[81715]: pgmap v3550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:28 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:29.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:29 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:29.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:30 compute-1 sudo[252175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:25:30 compute-1 sudo[252175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:30 compute-1 sudo[252175]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:30 compute-1 sudo[252200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:25:30 compute-1 sudo[252200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:30 compute-1 sudo[252200]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:31 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:31 compute-1 ceph-mon[81715]: pgmap v3551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:25:31 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:25:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:31.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:25:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 20K writes, 109K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 20K writes, 20K syncs, 1.00 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1696 writes, 9789 keys, 1696 commit groups, 1.0 writes per commit group, ingest: 16.34 MB, 0.03 MB/s
                                           Interval WAL: 1696 writes, 1696 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     76.8      1.52              0.38        71    0.021       0      0       0.0       0.0
                                             L6      1/0   10.09 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.9    147.5    127.6      5.35              2.11        70    0.076    743K    40K       0.0       0.0
                                            Sum      1/0   10.09 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.9    114.9    116.4      6.87              2.49       141    0.049    743K    40K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.8    144.6    145.4      0.52              0.27        12    0.043     90K   4955       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    147.5    127.6      5.35              2.11        70    0.076    743K    40K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     76.9      1.51              0.38        70    0.022       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.114, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.78 GB write, 0.12 MB/s write, 0.77 GB read, 0.12 MB/s read, 6.9 seconds
                                           Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 83.47 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000534 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4362,78.95 MB,25.9711%) FilterBlock(141,2.02 MB,0.663491%) IndexBlock(141,2.50 MB,0.823397%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 15:25:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:31.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:32 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:33 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:33 compute-1 ceph-mon[81715]: pgmap v3552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:33 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:33.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:33.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:34 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:35 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:35 compute-1 ceph-mon[81715]: pgmap v3553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:35.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:35.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:36 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:37 compute-1 ceph-mon[81715]: pgmap v3554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:37 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:37.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:37.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:38 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:38 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:39 compute-1 ceph-mon[81715]: pgmap v3555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:39 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:39.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:39.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:40 compute-1 podman[252225]: 2026-01-22 15:25:40.049925338 +0000 UTC m=+0.046080198 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:25:40 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:41.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:41.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:42 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:42 compute-1 ceph-mon[81715]: pgmap v3556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:42 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:43 compute-1 ceph-mon[81715]: pgmap v3557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:43 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:43 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:43.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:43.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:44 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:45 compute-1 ceph-mon[81715]: pgmap v3558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:45 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:45.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:45.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:47 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:25:47.525 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:25:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:25:47.525 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:25:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:25:47.525 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:25:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:47.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:47.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:48 compute-1 ceph-mon[81715]: pgmap v3559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:48 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:48 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:48 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:49 compute-1 ceph-mon[81715]: pgmap v3560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:49 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:49.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:49.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:50 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:50 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:50 compute-1 ceph-mon[81715]: pgmap v3561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:51 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:51.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:51.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:52 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:52 compute-1 ceph-mon[81715]: pgmap v3562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:53.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:53 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:53 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:53.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #223. Immutable memtables: 0.
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.063910) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 143] Flushing memtable with next log file: 223
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554063937, "job": 143, "event": "flush_started", "num_memtables": 1, "num_entries": 1923, "num_deletes": 449, "total_data_size": 3295031, "memory_usage": 3359200, "flush_reason": "Manual Compaction"}
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 143] Level-0 flush table #224: started
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554078163, "cf_name": "default", "job": 143, "event": "table_file_creation", "file_number": 224, "file_size": 2150401, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 107722, "largest_seqno": 109640, "table_properties": {"data_size": 2143190, "index_size": 3576, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 23099, "raw_average_key_size": 22, "raw_value_size": 2125694, "raw_average_value_size": 2088, "num_data_blocks": 155, "num_entries": 1018, "num_filter_entries": 1018, "num_deletions": 449, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095438, "oldest_key_time": 1769095438, "file_creation_time": 1769095554, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 224, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 143] Flush lasted 14306 microseconds, and 4747 cpu microseconds.
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.078214) [db/flush_job.cc:967] [default] [JOB 143] Level-0 flush table #224: 2150401 bytes OK
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.078230) [db/memtable_list.cc:519] [default] Level-0 commit table #224 started
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.079736) [db/memtable_list.cc:722] [default] Level-0 commit table #224: memtable #1 done
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.079747) EVENT_LOG_v1 {"time_micros": 1769095554079744, "job": 143, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.079763) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 143] Try to delete WAL files size 3285297, prev total WAL file size 3285297, number of live WAL files 2.
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000220.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.080444) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039323837' seq:72057594037927935, type:22 .. '7061786F730039353339' seq:0, type:0; will stop at (end)
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 144] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 143 Base level 0, inputs: [224(2100KB)], [222(10MB)]
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554080468, "job": 144, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [224], "files_L6": [222], "score": -1, "input_data_size": 12734718, "oldest_snapshot_seqno": -1}
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 144] Generated table #225: 14173 keys, 10835853 bytes, temperature: kUnknown
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554155576, "cf_name": "default", "job": 144, "event": "table_file_creation", "file_number": 225, "file_size": 10835853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10760225, "index_size": 39099, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35461, "raw_key_size": 390633, "raw_average_key_size": 27, "raw_value_size": 10520785, "raw_average_value_size": 742, "num_data_blocks": 1404, "num_entries": 14173, "num_filter_entries": 14173, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095554, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 225, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.156132) [db/compaction/compaction_job.cc:1663] [default] [JOB 144] Compacted 1@0 + 1@6 files to L6 => 10835853 bytes
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.157773) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.0 rd, 143.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 10.1 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(11.0) write-amplify(5.0) OK, records in: 15084, records dropped: 911 output_compression: NoCompression
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.157793) EVENT_LOG_v1 {"time_micros": 1769095554157783, "job": 144, "event": "compaction_finished", "compaction_time_micros": 75352, "compaction_time_cpu_micros": 27297, "output_level": 6, "num_output_files": 1, "total_output_size": 10835853, "num_input_records": 15084, "num_output_records": 14173, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000224.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554158901, "job": 144, "event": "table_file_deletion", "file_number": 224}
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000222.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554161268, "job": 144, "event": "table_file_deletion", "file_number": 222}
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.080409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.161384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.161388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.161389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.161391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:25:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.161392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:25:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:55 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:55 compute-1 ceph-mon[81715]: pgmap v3563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:55.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:55.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:56 compute-1 podman[252245]: 2026-01-22 15:25:56.09145759 +0000 UTC m=+0.082214745 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 15:25:56 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:57 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:57 compute-1 ceph-mon[81715]: pgmap v3564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:57.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:57.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:58 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:58 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:59 compute-1 ceph-mon[81715]: pgmap v3565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:59 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:59.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:25:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:59.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:00 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:26:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:01.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:26:01 compute-1 ceph-mon[81715]: pgmap v3566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:01 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:01 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:01.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:03 compute-1 ceph-mon[81715]: pgmap v3567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:03 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:03.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:03.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:04 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:04 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:05 compute-1 ceph-mon[81715]: pgmap v3568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:05 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:05.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:05.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:06 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:07 compute-1 ceph-mon[81715]: pgmap v3569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:07 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:26:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:07.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:26:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:26:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:07.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:26:08 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:08 compute-1 ceph-mon[81715]: pgmap v3570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:08 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:09 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:09 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:26:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:09.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:26:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:09.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:10 compute-1 ceph-mon[81715]: pgmap v3571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:10 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:11 compute-1 podman[252271]: 2026-01-22 15:26:11.0688847 +0000 UTC m=+0.056943702 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 15:26:11 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:11.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:11.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:12 compute-1 ceph-mon[81715]: pgmap v3572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:12 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:13 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:13 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6563 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:13.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:26:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:13.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:26:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:14 compute-1 ceph-mon[81715]: pgmap v3573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:14 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:15 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:15.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:15.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:16 compute-1 ceph-mon[81715]: pgmap v3574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:16 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:17 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:17.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:17.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:26:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2075344860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:26:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:26:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2075344860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:26:18 compute-1 ceph-mon[81715]: pgmap v3575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:18 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:18 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6568 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2075344860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:26:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2075344860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:26:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:19.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:19.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:20 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:21 compute-1 ceph-mon[81715]: pgmap v3576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:21 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:21.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:21.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:23 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:26:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:23.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:26:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:23.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:24 compute-1 ceph-mon[81715]: pgmap v3577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:24 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:24 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6573 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:24 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:24 compute-1 ceph-mon[81715]: pgmap v3578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:25 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:25.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:26:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:25.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:26:26 compute-1 sshd-session[252290]: Invalid user admin from 2.57.121.112 port 10541
Jan 22 15:26:26 compute-1 podman[252292]: 2026-01-22 15:26:26.37573038 +0000 UTC m=+0.172599020 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:26:26 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:26 compute-1 ceph-mon[81715]: pgmap v3579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:26 compute-1 sshd-session[252290]: Received disconnect from 2.57.121.112 port 10541:11: Bye [preauth]
Jan 22 15:26:26 compute-1 sshd-session[252290]: Disconnected from invalid user admin 2.57.121.112 port 10541 [preauth]
Jan 22 15:26:27 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:27.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:26:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:27.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:26:28 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:28 compute-1 ceph-mon[81715]: pgmap v3580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:28 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:28 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:29.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:29.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:30 compute-1 sudo[252320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:26:30 compute-1 sudo[252320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:30 compute-1 sudo[252320]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:30 compute-1 sudo[252345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:26:30 compute-1 sudo[252345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:30 compute-1 sudo[252345]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:30 compute-1 sudo[252370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:26:30 compute-1 sudo[252370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:30 compute-1 sudo[252370]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:30 compute-1 sudo[252395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:26:30 compute-1 sudo[252395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:30 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:30 compute-1 ceph-mon[81715]: pgmap v3581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:31 compute-1 sudo[252395]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:31.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:31.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:31 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:33 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:33 compute-1 ceph-mon[81715]: pgmap v3582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:26:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:26:33 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:26:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:26:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:26:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:26:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:26:33 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:26:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:33.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:33.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:34 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:35 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:35 compute-1 ceph-mon[81715]: pgmap v3583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:35.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:26:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:35.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:26:36 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:37 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:37 compute-1 ceph-mon[81715]: pgmap v3584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:37.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:37.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:38 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:38 compute-1 ceph-mon[81715]: pgmap v3585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:38 compute-1 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6588 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:39 compute-1 sudo[252451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:26:39 compute-1 sudo[252451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:39 compute-1 sudo[252451]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:39 compute-1 sudo[252476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:26:39 compute-1 sudo[252476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:39 compute-1 sudo[252476]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:39 compute-1 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:26:39 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:26:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:39.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:26:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:39.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:26:40 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:40 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:40 compute-1 ceph-mon[81715]: pgmap v3586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:41 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:41 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:26:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:41.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:26:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:26:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:41.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:26:42 compute-1 podman[252501]: 2026-01-22 15:26:42.111375488 +0000 UTC m=+0.087177449 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:26:42 compute-1 ceph-mon[81715]: pgmap v3587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:43.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:26:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:43.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:26:43 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:43 compute-1 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 6593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:44 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:44 compute-1 ceph-mon[81715]: pgmap v3588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:45.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:45.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:45 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:46 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:46 compute-1 ceph-mon[81715]: pgmap v3589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:26:47.526 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:26:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:26:47.526 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:26:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:26:47.526 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:26:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:47.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:26:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:47.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:26:47 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:47 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:47 compute-1 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 6598 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:49 compute-1 ceph-mon[81715]: pgmap v3590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:49 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:49.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:49.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:50 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:51 compute-1 ceph-mon[81715]: pgmap v3591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:51 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:51.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:51.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:52 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:53 compute-1 ceph-mon[81715]: pgmap v3592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:53 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:53 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 6603 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:53.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:26:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:53.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:26:54 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:55 compute-1 ceph-mon[81715]: pgmap v3593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:55 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:26:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:55.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:26:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:55.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:56 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:57 compute-1 podman[252518]: 2026-01-22 15:26:57.097688968 +0000 UTC m=+0.090638082 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 15:26:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:57.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:57.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:58 compute-1 ceph-mon[81715]: pgmap v3594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:58 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:58 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:59 compute-1 ceph-mon[81715]: pgmap v3595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:59 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 6608 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:59 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:26:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:59.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:26:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:26:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:59.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:00 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:01 compute-1 ceph-mon[81715]: pgmap v3596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:01.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:01.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:02 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:03 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:03 compute-1 ceph-mon[81715]: pgmap v3597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Jan 22 15:27:03 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 6613 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:03.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:03.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:04 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:05 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:05 compute-1 ceph-mon[81715]: pgmap v3598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Jan 22 15:27:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:05.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:05.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:06 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:07 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:07 compute-1 ceph-mon[81715]: pgmap v3599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 15:27:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:07.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:07.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:08 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:08 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 6618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:09 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:09 compute-1 ceph-mon[81715]: pgmap v3600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 15:27:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:09.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:09.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:10 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:10 compute-1 ceph-mon[81715]: pgmap v3601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 15:27:11 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:11.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:11.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:12 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:12 compute-1 ceph-mon[81715]: pgmap v3602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 15:27:13 compute-1 podman[252544]: 2026-01-22 15:27:13.063455981 +0000 UTC m=+0.053191059 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 15:27:13 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:13 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 6623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:13.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:13.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:14 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:14 compute-1 ceph-mon[81715]: pgmap v3603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Jan 22 15:27:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:15 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:15.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:15.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:16 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:16 compute-1 ceph-mon[81715]: pgmap v3604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 0 B/s wr, 112 op/s
Jan 22 15:27:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:17.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:17 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:17 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:17.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:18 compute-1 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:18 compute-1 ceph-mon[81715]: pgmap v3605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 15:27:18 compute-1 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 6628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:27:19 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3894192697' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:27:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:27:19 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3894192697' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:27:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:27:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:19.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:27:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:19.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:20 compute-1 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:20 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3894192697' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:27:20 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3894192697' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:27:21 compute-1 ceph-mon[81715]: pgmap v3606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 15:27:21 compute-1 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:21 compute-1 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:21.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:21.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:22 compute-1 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:22 compute-1 ceph-mon[81715]: pgmap v3607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 15:27:23 compute-1 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:23 compute-1 ceph-mon[81715]: Health check update: 80 slow ops, oldest one blocked for 6633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:23.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:27:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:23.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:27:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:24 compute-1 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:24 compute-1 ceph-mon[81715]: pgmap v3608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 15:27:25 compute-1 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:25.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:25.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:27 compute-1 ceph-mon[81715]: pgmap v3609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 15:27:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:27.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:27.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:28 compute-1 podman[252563]: 2026-01-22 15:27:28.104875431 +0000 UTC m=+0.091419124 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:27:28 compute-1 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:28 compute-1 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:28 compute-1 ceph-mon[81715]: pgmap v3610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:28 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:28 compute-1 ceph-mon[81715]: Health check update: 80 slow ops, oldest one blocked for 6638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:29 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:29.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:29.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:30 compute-1 ceph-mon[81715]: pgmap v3611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:30 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:31.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:31.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:32 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:33 compute-1 ceph-mon[81715]: pgmap v3612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:33 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:33 compute-1 ceph-mon[81715]: Health check update: 78 slow ops, oldest one blocked for 6643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:33.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:27:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:33.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:27:34 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:35 compute-1 ceph-mon[81715]: pgmap v3613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:35 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:35.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:35.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:36 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:37 compute-1 ceph-mon[81715]: pgmap v3614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:37 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:37.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:37.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:38 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:38 compute-1 ceph-mon[81715]: Health check update: 78 slow ops, oldest one blocked for 6648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:39 compute-1 ceph-mon[81715]: pgmap v3615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:39 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:39 compute-1 sudo[252589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:27:39 compute-1 sudo[252589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:39 compute-1 sudo[252589]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:39 compute-1 sudo[252614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:27:39 compute-1 sudo[252614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:39 compute-1 sudo[252614]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:39 compute-1 sudo[252639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:27:39 compute-1 sudo[252639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:39 compute-1 sudo[252639]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:39.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:39 compute-1 sudo[252664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 15:27:39 compute-1 sudo[252664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:39.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:40 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:40 compute-1 podman[252761]: 2026-01-22 15:27:40.430610485 +0000 UTC m=+0.058971696 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 15:27:40 compute-1 podman[252761]: 2026-01-22 15:27:40.587157799 +0000 UTC m=+0.215518960 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 15:27:41 compute-1 sudo[252664]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:27:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:41.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:27:41 compute-1 ceph-mon[81715]: pgmap v3616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:41 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:41.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:42 compute-1 sudo[252883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:27:42 compute-1 sudo[252883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:42 compute-1 sudo[252883]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:42 compute-1 sudo[252908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:27:42 compute-1 sudo[252908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:42 compute-1 sudo[252908]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:42 compute-1 sudo[252933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:27:42 compute-1 sudo[252933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:42 compute-1 sudo[252933]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:42 compute-1 sudo[252958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:27:42 compute-1 sudo[252958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:42 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:27:42 compute-1 ceph-mon[81715]: pgmap v3617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:42 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:42 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:27:43 compute-1 sudo[252958]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:43.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:43 compute-1 ceph-mon[81715]: Health check update: 78 slow ops, oldest one blocked for 6653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:43 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:27:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:27:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:27:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:27:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:27:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:27:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:43.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:44 compute-1 podman[253014]: 2026-01-22 15:27:44.071527939 +0000 UTC m=+0.051625527 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 15:27:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:44 compute-1 ceph-mon[81715]: pgmap v3618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:44 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:45.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:45 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:45.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:47 compute-1 ceph-mon[81715]: pgmap v3619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:47 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:27:47.527 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:27:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:27:47.528 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:27:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:27:47.528 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:27:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:27:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:47.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:27:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:48.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:48 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:48 compute-1 ceph-mon[81715]: Health check update: 78 slow ops, oldest one blocked for 6658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:49 compute-1 ceph-mon[81715]: pgmap v3620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:49 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:49.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:50.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:50 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:27:50 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:27:50 compute-1 sudo[253031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:27:50 compute-1 sudo[253031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:50 compute-1 sudo[253031]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:50 compute-1 sudo[253056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:27:50 compute-1 sudo[253056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:50 compute-1 sudo[253056]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:51 compute-1 ceph-mon[81715]: pgmap v3621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:51 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:51.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:52.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:52 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:53 compute-1 ceph-mon[81715]: pgmap v3622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:53 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:53 compute-1 ceph-mon[81715]: Health check update: 78 slow ops, oldest one blocked for 6663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:53.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:54.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:54 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:55 compute-1 ceph-mon[81715]: pgmap v3623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:55 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:55.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:56.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:56 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:57 compute-1 ceph-mon[81715]: pgmap v3624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:57 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:57.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:58.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:58 compute-1 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:58 compute-1 ceph-mon[81715]: Health check update: 78 slow ops, oldest one blocked for 6668 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:59 compute-1 podman[253081]: 2026-01-22 15:27:59.126800405 +0000 UTC m=+0.107455069 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 15:27:59 compute-1 ceph-mon[81715]: pgmap v3625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:59 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:27:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:27:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:59.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:00.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:00 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:01 compute-1 ceph-mon[81715]: pgmap v3626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:01 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:01.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:02.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:02 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:02 compute-1 ceph-mon[81715]: pgmap v3627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:03 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:03 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6673 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:03.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:04.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:04 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:04 compute-1 ceph-mon[81715]: pgmap v3628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:05 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:05 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:05.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:06.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:06 compute-1 ceph-mon[81715]: pgmap v3629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:06 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:07.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:08.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:08 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:09 compute-1 ceph-mon[81715]: pgmap v3630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:09 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6678 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:09 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:09.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:10.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:10 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:11 compute-1 ceph-mon[81715]: pgmap v3631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:11 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:11.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:12.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:12 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:12 compute-1 ceph-mon[81715]: pgmap v3632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:13 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:13 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6683 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:13.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:14.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:14 compute-1 podman[253107]: 2026-01-22 15:28:14.893470202 +0000 UTC m=+0.076617744 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 15:28:15 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:15 compute-1 ceph-mon[81715]: pgmap v3633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:15 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:15.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:16 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:16 compute-1 ceph-mon[81715]: pgmap v3634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:16 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:17 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:17.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:18.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:28:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4199670489' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:28:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:28:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4199670489' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:28:18 compute-1 ceph-mon[81715]: pgmap v3635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:18 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:18 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6688 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4199670489' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:28:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4199670489' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:28:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:19.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:20.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:21 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:21 compute-1 ceph-mon[81715]: pgmap v3636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:21 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:21.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:22.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:22 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:23 compute-1 ceph-mon[81715]: pgmap v3637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:23 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6693 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:23.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:24.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:24 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:24 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:25 compute-1 ceph-mon[81715]: pgmap v3638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:25 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:25.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:26.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:27 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:27 compute-1 ceph-mon[81715]: pgmap v3639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:27 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:27.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:28.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:28 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:28 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:29 compute-1 ceph-mon[81715]: pgmap v3640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:29 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:29 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:29.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:30.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:30 compute-1 podman[253126]: 2026-01-22 15:28:30.107572215 +0000 UTC m=+0.093671916 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 15:28:30 compute-1 ceph-mon[81715]: pgmap v3641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:30 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:31 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:31.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:32.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:32 compute-1 ceph-mon[81715]: pgmap v3642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:32 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:33.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:34.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:34 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:34 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6703 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:35 compute-1 ceph-mon[81715]: pgmap v3643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:35 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:35.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:36.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:36 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:36 compute-1 ceph-mon[81715]: pgmap v3644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:36 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:37.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:38.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:38 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:38 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:39 compute-1 ceph-mon[81715]: pgmap v3645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:39 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:39.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:40.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:40 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:40 compute-1 ceph-mon[81715]: pgmap v3646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:40 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:41.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:42.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:42 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #226. Immutable memtables: 0.
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.498586) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 145] Flushing memtable with next log file: 226
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723498733, "job": 145, "event": "flush_started", "num_memtables": 1, "num_entries": 2684, "num_deletes": 542, "total_data_size": 4857881, "memory_usage": 4935864, "flush_reason": "Manual Compaction"}
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 145] Level-0 flush table #227: started
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723547929, "cf_name": "default", "job": 145, "event": "table_file_creation", "file_number": 227, "file_size": 3165791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 109645, "largest_seqno": 112324, "table_properties": {"data_size": 3155857, "index_size": 5403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 31224, "raw_average_key_size": 22, "raw_value_size": 3131924, "raw_average_value_size": 2304, "num_data_blocks": 228, "num_entries": 1359, "num_filter_entries": 1359, "num_deletions": 542, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095554, "oldest_key_time": 1769095554, "file_creation_time": 1769095723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 227, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 145] Flush lasted 49580 microseconds, and 7850 cpu microseconds.
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.548269) [db/flush_job.cc:967] [default] [JOB 145] Level-0 flush table #227: 3165791 bytes OK
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.548394) [db/memtable_list.cc:519] [default] Level-0 commit table #227 started
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.549796) [db/memtable_list.cc:722] [default] Level-0 commit table #227: memtable #1 done
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.549808) EVENT_LOG_v1 {"time_micros": 1769095723549804, "job": 145, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.549824) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 145] Try to delete WAL files size 4844635, prev total WAL file size 4844635, number of live WAL files 2.
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000223.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.551440) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035323836' seq:72057594037927935, type:22 .. '6C6F676D0035353339' seq:0, type:0; will stop at (end)
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 146] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 145 Base level 0, inputs: [227(3091KB)], [225(10MB)]
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723551520, "job": 146, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [227], "files_L6": [225], "score": -1, "input_data_size": 14001644, "oldest_snapshot_seqno": -1}
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 146] Generated table #228: 14433 keys, 13750079 bytes, temperature: kUnknown
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723632643, "cf_name": "default", "job": 146, "event": "table_file_creation", "file_number": 228, "file_size": 13750079, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13669738, "index_size": 43172, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36101, "raw_key_size": 396006, "raw_average_key_size": 27, "raw_value_size": 13422929, "raw_average_value_size": 930, "num_data_blocks": 1574, "num_entries": 14433, "num_filter_entries": 14433, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 228, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.633035) [db/compaction/compaction_job.cc:1663] [default] [JOB 146] Compacted 1@0 + 1@6 files to L6 => 13750079 bytes
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.634309) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.3 rd, 169.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.3 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(8.8) write-amplify(4.3) OK, records in: 15532, records dropped: 1099 output_compression: NoCompression
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.634328) EVENT_LOG_v1 {"time_micros": 1769095723634318, "job": 146, "event": "compaction_finished", "compaction_time_micros": 81269, "compaction_time_cpu_micros": 36049, "output_level": 6, "num_output_files": 1, "total_output_size": 13750079, "num_input_records": 15532, "num_output_records": 14433, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000227.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723635005, "job": 146, "event": "table_file_deletion", "file_number": 227}
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000225.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723636936, "job": 146, "event": "table_file_deletion", "file_number": 225}
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.551316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.637049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.637054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.637055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.637057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:43 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.637058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:43.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:43 compute-1 ceph-mon[81715]: pgmap v3647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:43 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:43 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:43 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:44.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:45 compute-1 podman[253152]: 2026-01-22 15:28:45.077413451 +0000 UTC m=+0.060889978 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 15:28:45 compute-1 ceph-mon[81715]: pgmap v3648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:45 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:45.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:46.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:46 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #229. Immutable memtables: 0.
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.885346) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 147] Flushing memtable with next log file: 229
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095726885397, "job": 147, "event": "flush_started", "num_memtables": 1, "num_entries": 308, "num_deletes": 258, "total_data_size": 128067, "memory_usage": 135096, "flush_reason": "Manual Compaction"}
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 147] Level-0 flush table #230: started
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095726888071, "cf_name": "default", "job": 147, "event": "table_file_creation", "file_number": 230, "file_size": 83592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 112329, "largest_seqno": 112632, "table_properties": {"data_size": 81614, "index_size": 141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5312, "raw_average_key_size": 18, "raw_value_size": 77703, "raw_average_value_size": 274, "num_data_blocks": 6, "num_entries": 283, "num_filter_entries": 283, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095723, "oldest_key_time": 1769095723, "file_creation_time": 1769095726, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 230, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 147] Flush lasted 2771 microseconds, and 1012 cpu microseconds.
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.888120) [db/flush_job.cc:967] [default] [JOB 147] Level-0 flush table #230: 83592 bytes OK
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.888142) [db/memtable_list.cc:519] [default] Level-0 commit table #230 started
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.889185) [db/memtable_list.cc:722] [default] Level-0 commit table #230: memtable #1 done
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.889198) EVENT_LOG_v1 {"time_micros": 1769095726889194, "job": 147, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.889215) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 147] Try to delete WAL files size 125797, prev total WAL file size 125797, number of live WAL files 2.
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000226.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.889801) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039353338' seq:72057594037927935, type:22 .. '7061786F730039373930' seq:0, type:0; will stop at (end)
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 148] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 147 Base level 0, inputs: [230(81KB)], [228(13MB)]
Jan 22 15:28:46 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095726889885, "job": 148, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [230], "files_L6": [228], "score": -1, "input_data_size": 13833671, "oldest_snapshot_seqno": -1}
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 148] Generated table #231: 14193 keys, 12058972 bytes, temperature: kUnknown
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095727448084, "cf_name": "default", "job": 148, "event": "table_file_creation", "file_number": 231, "file_size": 12058972, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11981514, "index_size": 40865, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35525, "raw_key_size": 391693, "raw_average_key_size": 27, "raw_value_size": 11740070, "raw_average_value_size": 827, "num_data_blocks": 1472, "num_entries": 14193, "num_filter_entries": 14193, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095726, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 231, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:28:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:28:47.528 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:28:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:28:47.528 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:28:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:28:47.529 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.448495) [db/compaction/compaction_job.cc:1663] [default] [JOB 148] Compacted 1@0 + 1@6 files to L6 => 12058972 bytes
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.773345) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 24.8 rd, 21.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.1 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(309.8) write-amplify(144.3) OK, records in: 14716, records dropped: 523 output_compression: NoCompression
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.773412) EVENT_LOG_v1 {"time_micros": 1769095727773388, "job": 148, "event": "compaction_finished", "compaction_time_micros": 558287, "compaction_time_cpu_micros": 41618, "output_level": 6, "num_output_files": 1, "total_output_size": 12058972, "num_input_records": 14716, "num_output_records": 14193, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000230.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095727773799, "job": 148, "event": "table_file_deletion", "file_number": 230}
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000228.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095727778406, "job": 148, "event": "table_file_deletion", "file_number": 228}
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.889506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.778556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.778563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.778565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.778567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.778569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:47 compute-1 ceph-mon[81715]: pgmap v3649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:47 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:47.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:48.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:49 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:49 compute-1 ceph-mon[81715]: pgmap v3650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:49 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:49 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:49.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:50.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:50 compute-1 sudo[253171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:28:50 compute-1 sudo[253171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:28:50 compute-1 sudo[253171]: pam_unix(sudo:session): session closed for user root
Jan 22 15:28:50 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:50 compute-1 sudo[253196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:28:50 compute-1 sudo[253196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:28:50 compute-1 sudo[253196]: pam_unix(sudo:session): session closed for user root
Jan 22 15:28:50 compute-1 sudo[253221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:28:50 compute-1 sudo[253221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:28:50 compute-1 sudo[253221]: pam_unix(sudo:session): session closed for user root
Jan 22 15:28:50 compute-1 sudo[253246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:28:50 compute-1 sudo[253246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:28:51 compute-1 sudo[253246]: pam_unix(sudo:session): session closed for user root
Jan 22 15:28:51 compute-1 ceph-mon[81715]: pgmap v3651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:51 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:51 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:28:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:28:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:51.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:52.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:53.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:54 compute-1 ceph-mon[81715]: pgmap v3652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:28:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:28:54 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:28:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:28:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:28:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:28:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:54.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:55 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:55 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:55 compute-1 ceph-mon[81715]: pgmap v3653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:55.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:56.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:56 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:56 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:56 compute-1 ceph-mon[81715]: pgmap v3654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:57 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:57 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:57.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:58.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:59 compute-1 ceph-mon[81715]: pgmap v3655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:59 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:59 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:28:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:59.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:00.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:00 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:00 compute-1 ceph-mon[81715]: pgmap v3656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:01 compute-1 podman[253303]: 2026-01-22 15:29:01.14871736 +0000 UTC m=+0.134839798 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 15:29:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:01.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:02.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:02 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:02 compute-1 sudo[253329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:29:02 compute-1 sudo[253329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:29:02 compute-1 sudo[253329]: pam_unix(sudo:session): session closed for user root
Jan 22 15:29:02 compute-1 sudo[253354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:29:02 compute-1 sudo[253354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:29:02 compute-1 sudo[253354]: pam_unix(sudo:session): session closed for user root
Jan 22 15:29:02 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:02 compute-1 ceph-mon[81715]: pgmap v3657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:29:02 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:29:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:29:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:03.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:29:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:04.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:04 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6732 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:04 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:05 compute-1 ceph-mon[81715]: pgmap v3658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:05 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:05.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:06.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:07 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:07 compute-1 ceph-mon[81715]: pgmap v3659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:07 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:07.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:08.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:08 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:08 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:09 compute-1 ceph-mon[81715]: pgmap v3660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:09 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:09.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:10.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:11 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:11 compute-1 ceph-mon[81715]: pgmap v3661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:11 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:11.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:12.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:12 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:13 compute-1 ceph-mon[81715]: pgmap v3662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:13 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:29:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:13.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:29:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:14.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:14 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:14 compute-1 ceph-mon[81715]: pgmap v3663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:14 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:15.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:15 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:16 compute-1 podman[253380]: 2026-01-22 15:29:16.080757954 +0000 UTC m=+0.066245064 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent)
Jan 22 15:29:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:16.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:17 compute-1 ceph-mon[81715]: pgmap v3664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:17 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:17 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:17.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:18.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:18 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:19 compute-1 ceph-mon[81715]: pgmap v3665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:19 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3691489123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:29:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3691489123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:29:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:19.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:29:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:20.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:29:20 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:21 compute-1 ceph-mon[81715]: pgmap v3666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:21 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:21 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:21.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:22.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:22 compute-1 ceph-mon[81715]: pgmap v3667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:22 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:23 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6752 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:23 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:24.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:24.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:24 compute-1 ceph-mon[81715]: pgmap v3668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:24 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:25 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:26.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:26.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:26 compute-1 ceph-mon[81715]: pgmap v3669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:26 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:28.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:28.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:28 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:29 compute-1 ceph-mon[81715]: pgmap v3670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:29 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:29 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:30.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:30.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:30 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:30 compute-1 ceph-mon[81715]: pgmap v3671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:30 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:31 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:32.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:32 compute-1 podman[253401]: 2026-01-22 15:29:32.117596709 +0000 UTC m=+0.094254931 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 15:29:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:32.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:33 compute-1 ceph-mon[81715]: pgmap v3672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:33 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:34.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:29:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:34.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:29:34 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:34 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6762 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:35 compute-1 ceph-mon[81715]: pgmap v3673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:35 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:36.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:36.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:36 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:36 compute-1 ceph-mon[81715]: pgmap v3674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:36 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:38.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:38.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:38 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:29:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:40.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:29:40 compute-1 ceph-mon[81715]: pgmap v3675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:40 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6767 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:40.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:41 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:41 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:41 compute-1 ceph-mon[81715]: pgmap v3676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:41 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:42.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:42.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:42 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:43 compute-1 ceph-mon[81715]: pgmap v3677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:43 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:43 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6772 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:43 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:44.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:44.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:44 compute-1 ceph-mon[81715]: pgmap v3678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:44 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:45 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:46.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:46.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:47 compute-1 podman[253428]: 2026-01-22 15:29:47.053405874 +0000 UTC m=+0.048352439 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:29:47 compute-1 ceph-mon[81715]: pgmap v3679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:47 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:29:47.528 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:29:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:29:47.529 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:29:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:29:47.529 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:29:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:48.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:48.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:48 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:48 compute-1 ceph-mon[81715]: pgmap v3680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:48 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6777 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:48 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:50.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:50.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:50 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:51 compute-1 ceph-mon[81715]: pgmap v3681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:51 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:52.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:52.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:52 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:53 compute-1 ceph-mon[81715]: pgmap v3682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:53 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:53 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6782 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:29:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:54.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:29:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:54.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:54 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:54 compute-1 ceph-mon[81715]: pgmap v3683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:55 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:56.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:56.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:56 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:56 compute-1 ceph-mon[81715]: pgmap v3684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:56 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:57 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:58.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:29:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:58.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:58 compute-1 ceph-mon[81715]: pgmap v3685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:58 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:58 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:59 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:00.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:00.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:00 compute-1 ceph-mon[81715]: pgmap v3686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:00 compute-1 ceph-mon[81715]: Health detail: HEALTH_WARN 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops
Jan 22 15:30:00 compute-1 ceph-mon[81715]: [WRN] SLOW_OPS: 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops
Jan 22 15:30:00 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:02.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:02.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:02 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:02 compute-1 sudo[253448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:30:02 compute-1 sudo[253448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:02 compute-1 sudo[253448]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:02 compute-1 sudo[253479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:30:02 compute-1 sudo[253479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:02 compute-1 sudo[253479]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:03 compute-1 sudo[253518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:30:03 compute-1 sudo[253518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:03 compute-1 sudo[253518]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:03 compute-1 podman[253472]: 2026-01-22 15:30:03.024313288 +0000 UTC m=+0.186198268 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:30:03 compute-1 sudo[253549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 15:30:03 compute-1 sudo[253549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:03 compute-1 ceph-mon[81715]: pgmap v3687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:03 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:03 compute-1 sudo[253549]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:03 compute-1 sudo[253594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:30:03 compute-1 sudo[253594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:03 compute-1 sudo[253594]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:03 compute-1 sudo[253619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:30:03 compute-1 sudo[253619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:03 compute-1 sudo[253619]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:03 compute-1 sudo[253644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:30:03 compute-1 sudo[253644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:03 compute-1 sudo[253644]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:03 compute-1 sudo[253669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:30:03 compute-1 sudo[253669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:04.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:04.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:04 compute-1 sudo[253669]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:04 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6792 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:30:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:30:04 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:05 compute-1 ceph-mon[81715]: pgmap v3688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:30:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:30:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:30:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:30:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:30:05 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:30:05 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:06.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:06.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:06 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:06 compute-1 ceph-mon[81715]: pgmap v3689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:07 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:08.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:08.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:09 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:09 compute-1 ceph-mon[81715]: pgmap v3690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:09 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:10.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:10.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:10 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:10 compute-1 sudo[253726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:30:10 compute-1 sudo[253726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:10 compute-1 sudo[253726]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:10 compute-1 sudo[253751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:30:10 compute-1 sudo[253751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:10 compute-1 sudo[253751]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:11 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:11 compute-1 ceph-mon[81715]: pgmap v3691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:30:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:30:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:12.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:12.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:12 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:13 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:13 compute-1 ceph-mon[81715]: pgmap v3692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:14.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:14.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:14 compute-1 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:14 compute-1 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:15 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:15 compute-1 ceph-mon[81715]: pgmap v3693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:16.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:16.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:16 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:17 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:17 compute-1 ceph-mon[81715]: pgmap v3694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:18 compute-1 podman[253776]: 2026-01-22 15:30:18.063138809 +0000 UTC m=+0.059309286 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:30:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:18.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:18.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:18 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:18 compute-1 ceph-mon[81715]: Health check update: 75 slow ops, oldest one blocked for 6808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:19 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:19 compute-1 ceph-mon[81715]: pgmap v3695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2581805369' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:30:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2581805369' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:30:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:20.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:20.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:20 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:20 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:20 compute-1 ceph-mon[81715]: pgmap v3696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:21 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:22.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:22.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:23 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:23 compute-1 ceph-mon[81715]: pgmap v3697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:24.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:24 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:24 compute-1 ceph-mon[81715]: Health check update: 75 slow ops, oldest one blocked for 6813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:24.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:25 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:25 compute-1 ceph-mon[81715]: pgmap v3698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:26.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:26.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:26 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:26 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:27 compute-1 ceph-mon[81715]: pgmap v3699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:27 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:28.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:28.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:28 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:28 compute-1 ceph-mon[81715]: pgmap v3700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:28 compute-1 ceph-mon[81715]: Health check update: 75 slow ops, oldest one blocked for 6818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:29 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:29 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:30:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:30.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:30:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:30.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:30 compute-1 ceph-mon[81715]: pgmap v3701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:30 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:31 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:32.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:32.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:32 compute-1 ceph-mon[81715]: pgmap v3702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:32 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:34 compute-1 ceph-mon[81715]: Health check update: 75 slow ops, oldest one blocked for 6822 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:34 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:34.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:34 compute-1 podman[253795]: 2026-01-22 15:30:34.131256891 +0000 UTC m=+0.117762687 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 15:30:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:34.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:35 compute-1 ceph-mon[81715]: pgmap v3703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:35 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:36.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:36.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:37 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:38.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:38.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:38 compute-1 ceph-mon[81715]: pgmap v3704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:38 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:38 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:38 compute-1 ceph-mon[81715]: pgmap v3705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:38 compute-1 ceph-mon[81715]: Health check update: 75 slow ops, oldest one blocked for 6827 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:39 compute-1 ceph-mon[81715]: 65 slow requests (by type [ 'delayed' : 65 ] most affected pool [ 'vms' : 41 ])
Jan 22 15:30:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:40.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:40.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:40 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:40 compute-1 ceph-mon[81715]: pgmap v3706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:40 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:41 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:42.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:42.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:43 compute-1 ceph-mon[81715]: pgmap v3707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:43 compute-1 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:44.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:30:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:44.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:30:44 compute-1 ceph-mon[81715]: Health check update: 75 slow ops, oldest one blocked for 6832 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:44 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:45 compute-1 ceph-mon[81715]: pgmap v3708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:45 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:46.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:30:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:46.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:30:46 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:30:47.529 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:30:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:30:47.530 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:30:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:30:47.530 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:30:47 compute-1 ceph-mon[81715]: pgmap v3709: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:47 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:30:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:48.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:30:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:48.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:48 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:48 compute-1 ceph-mon[81715]: pgmap v3710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:48 compute-1 ceph-mon[81715]: Health check update: 173 slow ops, oldest one blocked for 6837 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:48 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:49 compute-1 podman[253821]: 2026-01-22 15:30:49.06322714 +0000 UTC m=+0.055691637 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 22 15:30:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:49 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:50.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:50.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:50 compute-1 ceph-mon[81715]: pgmap v3711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:50 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:51 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:52.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:52.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:53 compute-1 ceph-mon[81715]: pgmap v3712: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:53 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:54 compute-1 ceph-mon[81715]: Health check update: 173 slow ops, oldest one blocked for 6842 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:54 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:54.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:54.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:55 compute-1 ceph-mon[81715]: pgmap v3713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:55 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:56.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:56.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:56 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:57 compute-1 ceph-mon[81715]: pgmap v3714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:57 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:58.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:30:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:58.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:58 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:58 compute-1 ceph-mon[81715]: Health check update: 173 slow ops, oldest one blocked for 6847 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:59 compute-1 ceph-mon[81715]: pgmap v3715: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:59 compute-1 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:59 compute-1 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:31:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:00.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:00.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:01 compute-1 ceph-mon[81715]: pgmap v3716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:01 compute-1 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:31:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:02.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:02 compute-1 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:31:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:02.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:03 compute-1 ceph-mon[81715]: pgmap v3717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:03 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:04.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:04.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:04 compute-1 ceph-mon[81715]: Health check update: 140 slow ops, oldest one blocked for 6852 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:04 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:05 compute-1 podman[253841]: 2026-01-22 15:31:05.143082761 +0000 UTC m=+0.128319213 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 22 15:31:05 compute-1 ceph-mon[81715]: pgmap v3718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:05 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:06.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:06.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:06 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:06 compute-1 ceph-mon[81715]: pgmap v3719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:07 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:07 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:08.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:08.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:09 compute-1 ceph-mon[81715]: pgmap v3720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:09 compute-1 ceph-mon[81715]: Health check update: 5 slow ops, oldest one blocked for 6857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:09 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:10.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:10.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:10 compute-1 sudo[253868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:10 compute-1 sudo[253868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:10 compute-1 sudo[253868]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:10 compute-1 sudo[253893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:31:10 compute-1 sudo[253893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:10 compute-1 sudo[253893]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:10 compute-1 sudo[253918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:10 compute-1 sudo[253918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:10 compute-1 sudo[253918]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:10 compute-1 sudo[253943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:31:10 compute-1 sudo[253943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:11 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:11 compute-1 ceph-mon[81715]: pgmap v3721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:11 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:11 compute-1 sudo[253943]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:12.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:12.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 15:31:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 15:31:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:31:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:31:12 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:13 compute-1 ceph-mon[81715]: pgmap v3722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:13 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:14.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:14.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:14 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:14 compute-1 ceph-mon[81715]: Health check update: 5 slow ops, oldest one blocked for 6862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:31:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:31:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:31:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:31:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:31:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:31:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:31:14 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:31:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:15 compute-1 ceph-mon[81715]: pgmap v3723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:15 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:16.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:16.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:16 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:16 compute-1 ceph-mon[81715]: pgmap v3724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:17 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:18.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:18.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:31:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3953269957' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:31:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:31:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3953269957' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:31:18 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:18 compute-1 ceph-mon[81715]: pgmap v3725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:18 compute-1 ceph-mon[81715]: Health check update: 5 slow ops, oldest one blocked for 6867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:19 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3953269957' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:31:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3953269957' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:31:19 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:20 compute-1 podman[254000]: 2026-01-22 15:31:20.071359491 +0000 UTC m=+0.056528250 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:31:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:20.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:20.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:20 compute-1 ceph-mon[81715]: pgmap v3726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:20 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:21 compute-1 sudo[254021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:21 compute-1 sudo[254021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:21 compute-1 sudo[254021]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:21 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:31:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:31:22 compute-1 sudo[254046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:31:22 compute-1 sudo[254046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:22 compute-1 sudo[254046]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:22.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:22.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:23 compute-1 ceph-mon[81715]: pgmap v3727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:23 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:24.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:24.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:24 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:24 compute-1 ceph-mon[81715]: Health check update: 5 slow ops, oldest one blocked for 6872 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:25 compute-1 ceph-mon[81715]: pgmap v3728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:25 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:26.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:26.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:26 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:27 compute-1 ceph-mon[81715]: pgmap v3729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:27 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:28.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:28.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:28 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:29 compute-1 ceph-mon[81715]: pgmap v3730: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:29 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:29 compute-1 ceph-mon[81715]: Health check update: 5 slow ops, oldest one blocked for 6877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:30.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:30.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:30 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:31 compute-1 ceph-mon[81715]: pgmap v3731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:31 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:32.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:32.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:32 compute-1 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:32 compute-1 ceph-mon[81715]: pgmap v3732: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:33 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:33 compute-1 ceph-mon[81715]: Health check update: 5 slow ops, oldest one blocked for 6882 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:34.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:34.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:34 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:34 compute-1 ceph-mon[81715]: pgmap v3733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:35 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:36 compute-1 podman[254071]: 2026-01-22 15:31:36.147224302 +0000 UTC m=+0.128319503 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 15:31:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:36.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:36.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:36 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:36 compute-1 ceph-mon[81715]: pgmap v3734: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:38 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:38.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:38.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #232. Immutable memtables: 0.
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.419081) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 149] Flushing memtable with next log file: 232
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898419141, "job": 149, "event": "flush_started", "num_memtables": 1, "num_entries": 2750, "num_deletes": 543, "total_data_size": 5142616, "memory_usage": 5238040, "flush_reason": "Manual Compaction"}
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 149] Level-0 flush table #233: started
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898440636, "cf_name": "default", "job": 149, "event": "table_file_creation", "file_number": 233, "file_size": 2094155, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 112637, "largest_seqno": 115382, "table_properties": {"data_size": 2085806, "index_size": 3950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 31336, "raw_average_key_size": 23, "raw_value_size": 2063839, "raw_average_value_size": 1570, "num_data_blocks": 166, "num_entries": 1314, "num_filter_entries": 1314, "num_deletions": 543, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095727, "oldest_key_time": 1769095727, "file_creation_time": 1769095898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 233, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 149] Flush lasted 21621 microseconds, and 10136 cpu microseconds.
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.440712) [db/flush_job.cc:967] [default] [JOB 149] Level-0 flush table #233: 2094155 bytes OK
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.440730) [db/memtable_list.cc:519] [default] Level-0 commit table #233 started
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.443578) [db/memtable_list.cc:722] [default] Level-0 commit table #233: memtable #1 done
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.443594) EVENT_LOG_v1 {"time_micros": 1769095898443589, "job": 149, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.443612) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 149] Try to delete WAL files size 5129027, prev total WAL file size 5137294, number of live WAL files 2.
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000229.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.445001) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323538' seq:72057594037927935, type:22 .. '6D6772737461740033353130' seq:0, type:0; will stop at (end)
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 150] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 149 Base level 0, inputs: [233(2045KB)], [231(11MB)]
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898445034, "job": 150, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [233], "files_L6": [231], "score": -1, "input_data_size": 14153127, "oldest_snapshot_seqno": -1}
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 150] Generated table #234: 14484 keys, 11396640 bytes, temperature: kUnknown
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898526967, "cf_name": "default", "job": 150, "event": "table_file_creation", "file_number": 234, "file_size": 11396640, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11319240, "index_size": 40103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36229, "raw_key_size": 397009, "raw_average_key_size": 27, "raw_value_size": 11074711, "raw_average_value_size": 764, "num_data_blocks": 1444, "num_entries": 14484, "num_filter_entries": 14484, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 234, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.527322) [db/compaction/compaction_job.cc:1663] [default] [JOB 150] Compacted 1@0 + 1@6 files to L6 => 11396640 bytes
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.528719) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.6 rd, 139.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.5 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(12.2) write-amplify(5.4) OK, records in: 15507, records dropped: 1023 output_compression: NoCompression
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.528743) EVENT_LOG_v1 {"time_micros": 1769095898528733, "job": 150, "event": "compaction_finished", "compaction_time_micros": 82012, "compaction_time_cpu_micros": 39537, "output_level": 6, "num_output_files": 1, "total_output_size": 11396640, "num_input_records": 15507, "num_output_records": 14484, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000233.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898529276, "job": 150, "event": "table_file_deletion", "file_number": 233}
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000231.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898531889, "job": 150, "event": "table_file_deletion", "file_number": 231}
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.444921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.531942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.531948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.531950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.531952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:31:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.531953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:31:39 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:39 compute-1 ceph-mon[81715]: pgmap v3735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:39 compute-1 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 6887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:40.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:40.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:41 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:41 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:41 compute-1 ceph-mon[81715]: pgmap v3736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:42.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:42.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:42 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:42 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:43 compute-1 ceph-mon[81715]: pgmap v3737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:43 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:43 compute-1 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 6892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:44.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:44.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:44 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:44 compute-1 ceph-mon[81715]: pgmap v3738: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:44 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:46 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:46.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:46.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:47 compute-1 ceph-mon[81715]: pgmap v3739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:47 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:31:47.530 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:31:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:31:47.531 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:31:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:31:47.531 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:31:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:48.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:48.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:48 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:48 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:49 compute-1 ceph-mon[81715]: pgmap v3740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:49 compute-1 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 6897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:49 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:50.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:50.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:51 compute-1 podman[254097]: 2026-01-22 15:31:51.078864433 +0000 UTC m=+0.066997943 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 15:31:51 compute-1 ceph-mon[81715]: pgmap v3741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:51 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:52.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:52.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:52 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:52 compute-1 ceph-mon[81715]: pgmap v3742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:54.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:54 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:54 compute-1 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 6903 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:54.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:55 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:55 compute-1 ceph-mon[81715]: pgmap v3743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:55 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:56.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:56.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:56 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:57 compute-1 ceph-mon[81715]: pgmap v3744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:57 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:58.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:31:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:58.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:58 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:58 compute-1 ceph-mon[81715]: pgmap v3745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:58 compute-1 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 6907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:59 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:00.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:00.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:00 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:32:00 compute-1 ceph-mon[81715]: pgmap v3746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:02 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:32:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:02.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:02.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:03 compute-1 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:32:03 compute-1 ceph-mon[81715]: pgmap v3747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:04.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:04.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:04 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:04 compute-1 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 6912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:04 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:05 compute-1 ceph-mon[81715]: pgmap v3748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:05 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:32:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:06.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:32:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:06.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:07 compute-1 podman[254116]: 2026-01-22 15:32:07.110514167 +0000 UTC m=+0.091411443 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller)
Jan 22 15:32:07 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:07 compute-1 ceph-mon[81715]: pgmap v3749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:08 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:08.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:08.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:09 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:09 compute-1 ceph-mon[81715]: pgmap v3750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:09 compute-1 ceph-mon[81715]: Health check update: 159 slow ops, oldest one blocked for 6917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:10.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:10.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:10 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:11 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:11 compute-1 ceph-mon[81715]: pgmap v3751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:11 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:12.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:12.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:12 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:12 compute-1 ceph-mon[81715]: pgmap v3752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:14.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:14.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:14 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:14 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:14 compute-1 ceph-mon[81715]: Health check update: 159 slow ops, oldest one blocked for 6922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:16 compute-1 ceph-mon[81715]: pgmap v3753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:16 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:16.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:16.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:17 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:17 compute-1 ceph-mon[81715]: pgmap v3754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:18.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:18.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:18 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:20.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:32:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:20.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:32:20 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:20 compute-1 ceph-mon[81715]: pgmap v3755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:20 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:20 compute-1 ceph-mon[81715]: Health check update: 159 slow ops, oldest one blocked for 6927 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:20 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1535123889' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:32:20 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1535123889' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:32:22 compute-1 podman[254144]: 2026-01-22 15:32:22.070491435 +0000 UTC m=+0.060962941 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 22 15:32:22 compute-1 sudo[254163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:32:22 compute-1 sudo[254163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:22 compute-1 sudo[254163]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:22.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:22.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:22 compute-1 sudo[254188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:32:22 compute-1 sudo[254188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:22 compute-1 sudo[254188]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:22 compute-1 sudo[254213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:32:22 compute-1 sudo[254213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:22 compute-1 sudo[254213]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:22 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:22 compute-1 ceph-mon[81715]: pgmap v3756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:22 compute-1 sudo[254238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:32:22 compute-1 sudo[254238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:23 compute-1 sudo[254238]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:23 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:32:23.895 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:32:23 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:32:23.896 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:32:23 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:23 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:23 compute-1 ceph-mon[81715]: pgmap v3757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:23 compute-1 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:23 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:32:23 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:32:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:32:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:24.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:32:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:24.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:25 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:25 compute-1 ceph-mon[81715]: Health check update: 159 slow ops, oldest one blocked for 6933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:32:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:32:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:32:25 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:32:25 compute-1 ceph-mon[81715]: pgmap v3758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:26.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:32:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:26.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:32:27 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:28.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:28.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:29 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:29 compute-1 ceph-mon[81715]: pgmap v3759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:29 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:29 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:29 compute-1 ceph-mon[81715]: pgmap v3760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:29 compute-1 ceph-mon[81715]: Health check update: 183 slow ops, oldest one blocked for 6938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:32:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:30.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:32:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:30.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:31 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:31 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:31 compute-1 ceph-mon[81715]: pgmap v3761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:31 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:32:31.898 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:32:32 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:32.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:32.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:33 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:33 compute-1 ceph-mon[81715]: pgmap v3762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:33 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:34 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:34 compute-1 ceph-mon[81715]: Health check update: 183 slow ops, oldest one blocked for 6943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:34.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:34.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:35 compute-1 sudo[254294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:32:35 compute-1 sudo[254294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:35 compute-1 sudo[254294]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:35 compute-1 sudo[254319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:32:35 compute-1 sudo[254319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:35 compute-1 sudo[254319]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:35 compute-1 ceph-mon[81715]: pgmap v3763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:35 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:32:35 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:32:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:36.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:36.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:36 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:37 compute-1 ceph-mon[81715]: pgmap v3764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:37 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:38 compute-1 podman[254344]: 2026-01-22 15:32:38.153430519 +0000 UTC m=+0.129239667 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 15:32:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:38.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:38.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:38 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:38 compute-1 ceph-mon[81715]: pgmap v3765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:39 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:39 compute-1 ceph-mon[81715]: Health check update: 183 slow ops, oldest one blocked for 6948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:32:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:40 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:40.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:40.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:41 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:41 compute-1 ceph-mon[81715]: pgmap v3766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:42.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:42.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:42 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:42 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:42 compute-1 ceph-mon[81715]: pgmap v3767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:43 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:32:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:44.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:44 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:44.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:44 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:44 compute-1 ceph-mon[81715]: Health check update: 183 slow ops, oldest one blocked for 6953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:44 compute-1 ceph-mon[81715]: pgmap v3768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:45 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:32:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:46.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:46 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:46.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:47 compute-1 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 84 ])
Jan 22 15:32:47 compute-1 ceph-mon[81715]: pgmap v3769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:32:47.532 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:32:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:32:47.532 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:32:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:32:47.532 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:32:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:48.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:48.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:48 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:48 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:48 compute-1 ceph-mon[81715]: pgmap v3770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:50 compute-1 ceph-mon[81715]: Health check update: 183 slow ops, oldest one blocked for 6958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:50 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:50.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:50.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:52 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:52 compute-1 ceph-mon[81715]: pgmap v3771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:52.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:52.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:53 compute-1 podman[254370]: 2026-01-22 15:32:53.074586767 +0000 UTC m=+0.054980519 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 15:32:53 compute-1 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:53 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:53 compute-1 ceph-mon[81715]: pgmap v3772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:54.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:54.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:54 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:54 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:54 compute-1 ceph-mon[81715]: Health check update: 183 slow ops, oldest one blocked for 6963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:54 compute-1 ceph-mon[81715]: pgmap v3773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:56 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:56.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:56.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:57 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:57 compute-1 ceph-mon[81715]: pgmap v3774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:58 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:58.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:32:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:58.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:59 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:59 compute-1 ceph-mon[81715]: pgmap v3775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:59 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 6968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:00.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:00.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:00 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:01 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:01 compute-1 ceph-mon[81715]: pgmap v3776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:01 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:02.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:02.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:03 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:03 compute-1 ceph-mon[81715]: pgmap v3777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:04.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:04.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:04 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:04 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 6973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:04 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:04 compute-1 ceph-mon[81715]: pgmap v3778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:05 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:06.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:06.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:08 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:08 compute-1 ceph-mon[81715]: pgmap v3779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:08.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:08.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:09 compute-1 podman[254390]: 2026-01-22 15:33:09.107943471 +0000 UTC m=+0.095587896 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 15:33:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:09 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:09 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:09 compute-1 ceph-mon[81715]: pgmap v3780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:09 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 6978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:10.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:10.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:11 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:11 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:11 compute-1 ceph-mon[81715]: pgmap v3781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:12 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:12.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:12.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:13 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:13 compute-1 ceph-mon[81715]: pgmap v3782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:13 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:14.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:14.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:14 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 6983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:14 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:14 compute-1 ceph-mon[81715]: pgmap v3783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:15 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:16.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:16.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:16 compute-1 ceph-mon[81715]: pgmap v3784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:16 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:17 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:18.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:18.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:18 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:18 compute-1 ceph-mon[81715]: pgmap v3785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/129700801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:33:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/129700801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:33:18 compute-1 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 6988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:20 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:20.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:20.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:21 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:21 compute-1 ceph-mon[81715]: pgmap v3786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:22.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:22.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:24 compute-1 podman[254416]: 2026-01-22 15:33:24.06539459 +0000 UTC m=+0.051576037 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 15:33:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:24.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:24.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:24 compute-1 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:24 compute-1 ceph-mon[81715]: pgmap v3787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:24 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:26 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:26 compute-1 ceph-mon[81715]: pgmap v3788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:26 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:26.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:26.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:28 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:28 compute-1 ceph-mon[81715]: pgmap v3789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:28 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:28 compute-1 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 6998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:28 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:28.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:28.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:29 compute-1 ceph-mon[81715]: pgmap v3790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:29 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:30.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:30.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:30 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:30 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:31 compute-1 ceph-mon[81715]: pgmap v3791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:31 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:32.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:32.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:32 compute-1 ceph-mon[81715]: pgmap v3792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:32 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:34 compute-1 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:34 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:34.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:34.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:35 compute-1 ceph-mon[81715]: pgmap v3793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:35 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:35 compute-1 sudo[254435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:33:35 compute-1 sudo[254435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:35 compute-1 sudo[254435]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:35 compute-1 sudo[254460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:33:35 compute-1 sudo[254460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:35 compute-1 sudo[254460]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:35 compute-1 sudo[254485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:33:35 compute-1 sudo[254485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:35 compute-1 sudo[254485]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:35 compute-1 sudo[254510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:33:35 compute-1 sudo[254510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:35 compute-1 sudo[254510]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:36 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:33:36 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:33:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:36.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:36.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:37 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:37 compute-1 ceph-mon[81715]: pgmap v3794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:33:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:33:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:33:37 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:33:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:38.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:38.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:38 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:38 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:38 compute-1 ceph-mon[81715]: pgmap v3795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:39 compute-1 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:39 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:40 compute-1 podman[254566]: 2026-01-22 15:33:40.118514136 +0000 UTC m=+0.092933946 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 15:33:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:40.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:40.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:40 compute-1 ceph-mon[81715]: pgmap v3796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:40 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:41 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:42.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:42.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:42 compute-1 sudo[254593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:33:42 compute-1 sudo[254593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:42 compute-1 sudo[254593]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:42 compute-1 sudo[254618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:33:42 compute-1 sudo[254618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:42 compute-1 sudo[254618]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:43 compute-1 ceph-mon[81715]: pgmap v3797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:43 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:33:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:33:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:44.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:44.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:44 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:44 compute-1 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:44 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:45 compute-1 ceph-mon[81715]: pgmap v3798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:45 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:46.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:46.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:47 compute-1 ceph-mon[81715]: pgmap v3799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:47 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:33:47.534 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:33:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:33:47.534 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:33:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:33:47.535 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:33:48 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:48.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:48.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:49 compute-1 ceph-mon[81715]: pgmap v3800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:49 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:49 compute-1 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:50.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:50.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:50 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:51 compute-1 ceph-mon[81715]: pgmap v3801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:51 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:51 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:52.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:52.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:52 compute-1 ceph-mon[81715]: pgmap v3802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:52 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #235. Immutable memtables: 0.
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.005185) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 151] Flushing memtable with next log file: 235
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034006045, "job": 151, "event": "flush_started", "num_memtables": 1, "num_entries": 2316, "num_deletes": 736, "total_data_size": 3741023, "memory_usage": 3824208, "flush_reason": "Manual Compaction"}
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 151] Level-0 flush table #236: started
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034029180, "cf_name": "default", "job": 151, "event": "table_file_creation", "file_number": 236, "file_size": 2442903, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 115387, "largest_seqno": 117698, "table_properties": {"data_size": 2434191, "index_size": 4181, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 29850, "raw_average_key_size": 21, "raw_value_size": 2411596, "raw_average_value_size": 1777, "num_data_blocks": 178, "num_entries": 1357, "num_filter_entries": 1357, "num_deletions": 736, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095898, "oldest_key_time": 1769095898, "file_creation_time": 1769096034, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 236, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 151] Flush lasted 24035 microseconds, and 11438 cpu microseconds.
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.029241) [db/flush_job.cc:967] [default] [JOB 151] Level-0 flush table #236: 2442903 bytes OK
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.029265) [db/memtable_list.cc:519] [default] Level-0 commit table #236 started
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.030700) [db/memtable_list.cc:722] [default] Level-0 commit table #236: memtable #1 done
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.030721) EVENT_LOG_v1 {"time_micros": 1769096034030714, "job": 151, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.030743) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 151] Try to delete WAL files size 3728491, prev total WAL file size 3728491, number of live WAL files 2.
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000232.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.032428) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035353338' seq:72057594037927935, type:22 .. '6C6F676D0035373931' seq:0, type:0; will stop at (end)
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 152] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 151 Base level 0, inputs: [236(2385KB)], [234(10MB)]
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034032537, "job": 152, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [236], "files_L6": [234], "score": -1, "input_data_size": 13839543, "oldest_snapshot_seqno": -1}
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 152] Generated table #237: 14352 keys, 11969979 bytes, temperature: kUnknown
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034124087, "cf_name": "default", "job": 152, "event": "table_file_creation", "file_number": 237, "file_size": 11969979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11892081, "index_size": 40921, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35909, "raw_key_size": 395202, "raw_average_key_size": 27, "raw_value_size": 11648603, "raw_average_value_size": 811, "num_data_blocks": 1472, "num_entries": 14352, "num_filter_entries": 14352, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096034, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 237, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.124973) [db/compaction/compaction_job.cc:1663] [default] [JOB 152] Compacted 1@0 + 1@6 files to L6 => 11969979 bytes
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.126990) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.4 rd, 130.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.9 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(10.6) write-amplify(4.9) OK, records in: 15841, records dropped: 1489 output_compression: NoCompression
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.127024) EVENT_LOG_v1 {"time_micros": 1769096034127009, "job": 152, "event": "compaction_finished", "compaction_time_micros": 92043, "compaction_time_cpu_micros": 35204, "output_level": 6, "num_output_files": 1, "total_output_size": 11969979, "num_input_records": 15841, "num_output_records": 14352, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000236.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034128469, "job": 152, "event": "table_file_deletion", "file_number": 236}
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000234.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034133084, "job": 152, "event": "table_file_deletion", "file_number": 234}
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.032315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.133163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.133169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.133171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.133173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:33:54 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.133175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:33:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:54.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:54.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:54 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:54 compute-1 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:55 compute-1 podman[254643]: 2026-01-22 15:33:55.088960437 +0000 UTC m=+0.073010386 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 22 15:33:55 compute-1 ceph-mon[81715]: pgmap v3803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:55 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:55 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:56.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:56.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:57 compute-1 ceph-mon[81715]: pgmap v3804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:57 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:58.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:33:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:58.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:58 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:59 compute-1 ceph-mon[81715]: pgmap v3805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:59 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:59 compute-1 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:59 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:00.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:00.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:01 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:01 compute-1 ceph-mon[81715]: pgmap v3806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:02.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:02.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:02 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:02 compute-1 ceph-mon[81715]: pgmap v3807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:02 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:03 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:04.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:04.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:05 compute-1 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:05 compute-1 ceph-mon[81715]: pgmap v3808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:05 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:06.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:06.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:07 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:07 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:07 compute-1 ceph-mon[81715]: pgmap v3809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:08.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:08.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:08 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:08 compute-1 ceph-mon[81715]: pgmap v3810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:08 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:08 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:34:08.839 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:34:08 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:34:08.840 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:34:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:10 compute-1 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:10 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:10.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:34:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:10.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:34:11 compute-1 podman[254662]: 2026-01-22 15:34:11.120998884 +0000 UTC m=+0.113396068 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 15:34:11 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:34:11.843 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:34:12 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:12 compute-1 ceph-mon[81715]: pgmap v3811: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:12.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:12.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:13 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:13 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:13 compute-1 ceph-mon[81715]: pgmap v3812: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:14 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:14 compute-1 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:14.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:14.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:15 compute-1 ceph-mon[81715]: 90 slow requests (by type [ 'delayed' : 90 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:34:15 compute-1 ceph-mon[81715]: pgmap v3813: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Jan 22 15:34:15 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:16.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:16.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:17 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:17 compute-1 ceph-mon[81715]: pgmap v3814: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 15:34:18 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:18.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:18.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:34:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/135746434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:34:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:34:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/135746434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:34:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 e180: 3 total, 3 up, 3 in
Jan 22 15:34:19 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:19 compute-1 ceph-mon[81715]: pgmap v3815: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 15:34:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/135746434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:34:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/135746434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:34:19 compute-1 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:19 compute-1 ceph-mon[81715]: osdmap e180: 3 total, 3 up, 3 in
Jan 22 15:34:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:20 compute-1 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 43 ])
Jan 22 15:34:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:20.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:20.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:21 compute-1 ceph-mon[81715]: 55 slow requests (by type [ 'delayed' : 55 ] most affected pool [ 'vms' : 34 ])
Jan 22 15:34:21 compute-1 ceph-mon[81715]: pgmap v3817: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 8 op/s
Jan 22 15:34:22 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:22.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:22.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:23 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:23 compute-1 ceph-mon[81715]: pgmap v3818: 305 pgs: 2 active+clean+laggy, 303 active+clean; 913 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 30 op/s
Jan 22 15:34:23 compute-1 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:24 compute-1 ceph-mon[81715]: Health check update: 55 slow ops, oldest one blocked for 7053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:24 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:24.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:24.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:25 compute-1 ceph-mon[81715]: pgmap v3819: 305 pgs: 2 active+clean+laggy, 303 active+clean; 928 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 45 op/s
Jan 22 15:34:25 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:26 compute-1 podman[254689]: 2026-01-22 15:34:26.059462394 +0000 UTC m=+0.055638057 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:34:26 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:26.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:26.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:27 compute-1 ceph-mon[81715]: pgmap v3820: 305 pgs: 2 active+clean+laggy, 303 active+clean; 928 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 845 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Jan 22 15:34:27 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:28.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:28.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:28 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:28 compute-1 ceph-mon[81715]: pgmap v3821: 305 pgs: 2 active+clean+laggy, 303 active+clean; 928 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 845 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Jan 22 15:34:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:29 compute-1 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 7058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:29 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:30.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:30.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:32 compute-1 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 16 ])
Jan 22 15:34:32 compute-1 ceph-mon[81715]: pgmap v3822: 305 pgs: 2 active+clean+laggy, 303 active+clean; 928 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.9 MiB/s wr, 36 op/s
Jan 22 15:34:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:32.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:32.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:33 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:33 compute-1 ceph-mon[81715]: pgmap v3823: 305 pgs: 2 active+clean+laggy, 303 active+clean; 928 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 15:34:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:34:33 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1083032130' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:34:33 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:34:33 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1083032130' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:34:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:34.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:34.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:34 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:34 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:34 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1083032130' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:34:34 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1083032130' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:34:34 compute-1 ceph-mon[81715]: Health check update: 21 slow ops, oldest one blocked for 7063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:34:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.5 total, 600.0 interval
                                           Cumulative writes: 16K writes, 50K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.00 MB/s
                                           Cumulative WAL: 16K writes, 6024 syncs, 2.82 writes per sync, written: 0.04 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 701 writes, 1265 keys, 701 commit groups, 1.0 writes per commit group, ingest: 0.53 MB, 0.00 MB/s
                                           Interval WAL: 701 writes, 347 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 15:34:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:35 compute-1 ceph-mon[81715]: pgmap v3824: 305 pgs: 2 active+clean+laggy, 303 active+clean; 913 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 820 KiB/s wr, 27 op/s
Jan 22 15:34:35 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:35 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:36.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:36.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:37 compute-1 ceph-mon[81715]: pgmap v3825: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 682 B/s wr, 18 op/s
Jan 22 15:34:37 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:34:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:38.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:34:38 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:38 compute-1 ceph-mon[81715]: pgmap v3826: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 15:34:38 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:40 compute-1 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 7068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:40 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:34:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:40.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:40 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:40.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:41 compute-1 ceph-mon[81715]: pgmap v3827: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 15:34:41 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:42 compute-1 podman[254709]: 2026-01-22 15:34:42.104610443 +0000 UTC m=+0.088650709 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:34:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:34:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:42.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:42 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:42.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:42 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:43 compute-1 sudo[254735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:34:43 compute-1 sudo[254735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:43 compute-1 sudo[254735]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:43 compute-1 sudo[254760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:34:43 compute-1 sudo[254760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:43 compute-1 sudo[254760]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:43 compute-1 sudo[254785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:34:43 compute-1 sudo[254785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:43 compute-1 sudo[254785]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:43 compute-1 sudo[254810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:34:43 compute-1 sudo[254810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:43 compute-1 ceph-mon[81715]: pgmap v3828: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 15:34:43 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:43 compute-1 sudo[254810]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:43 compute-1 sudo[254868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:34:43 compute-1 sudo[254868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:43 compute-1 sudo[254868]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:43 compute-1 sudo[254893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:34:43 compute-1 sudo[254893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:43 compute-1 sudo[254893]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:44 compute-1 sudo[254918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:34:44 compute-1 sudo[254918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:44 compute-1 sudo[254918]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:44 compute-1 sudo[254943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 22 15:34:44 compute-1 sudo[254943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:44 compute-1 sudo[254943]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:34:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:44 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:44.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:34:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:44.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:34:44 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 15:34:44 compute-1 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 7073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:44 compute-1 ceph-mon[81715]: pgmap v3829: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 5.0 KiB/s rd, 597 B/s wr, 8 op/s
Jan 22 15:34:44 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:34:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:34:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:34:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:34:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:34:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:34:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:34:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:34:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:34:45 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:34:45 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:34:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:46.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:46 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:46.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:46 compute-1 ceph-mon[81715]: pgmap v3830: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Jan 22 15:34:46 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:34:47.535 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:34:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:34:47.535 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:34:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:34:47.535 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:34:47 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #238. Immutable memtables: 0.
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.815216) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 153] Flushing memtable with next log file: 238
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087815248, "job": 153, "event": "flush_started", "num_memtables": 1, "num_entries": 1026, "num_deletes": 346, "total_data_size": 1638939, "memory_usage": 1657928, "flush_reason": "Manual Compaction"}
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 153] Level-0 flush table #239: started
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087823408, "cf_name": "default", "job": 153, "event": "table_file_creation", "file_number": 239, "file_size": 1076604, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 117703, "largest_seqno": 118724, "table_properties": {"data_size": 1071940, "index_size": 1995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 14126, "raw_average_key_size": 22, "raw_value_size": 1061306, "raw_average_value_size": 1692, "num_data_blocks": 84, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 346, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096034, "oldest_key_time": 1769096034, "file_creation_time": 1769096087, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 239, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 153] Flush lasted 8230 microseconds, and 3650 cpu microseconds.
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.823445) [db/flush_job.cc:967] [default] [JOB 153] Level-0 flush table #239: 1076604 bytes OK
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.823461) [db/memtable_list.cc:519] [default] Level-0 commit table #239 started
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.824780) [db/memtable_list.cc:722] [default] Level-0 commit table #239: memtable #1 done
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.824828) EVENT_LOG_v1 {"time_micros": 1769096087824817, "job": 153, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.824854) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 153] Try to delete WAL files size 1633283, prev total WAL file size 1633283, number of live WAL files 2.
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000235.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.825720) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. '7061786F73003130323932' seq:0, type:0; will stop at (end)
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 154] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 153 Base level 0, inputs: [239(1051KB)], [237(11MB)]
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087825761, "job": 154, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [239], "files_L6": [237], "score": -1, "input_data_size": 13046583, "oldest_snapshot_seqno": -1}
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 154] Generated table #240: 14268 keys, 11331039 bytes, temperature: kUnknown
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087920982, "cf_name": "default", "job": 154, "event": "table_file_creation", "file_number": 240, "file_size": 11331039, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11253991, "index_size": 40263, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35717, "raw_key_size": 393638, "raw_average_key_size": 27, "raw_value_size": 11012139, "raw_average_value_size": 771, "num_data_blocks": 1445, "num_entries": 14268, "num_filter_entries": 14268, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096087, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 240, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.921326) [db/compaction/compaction_job.cc:1663] [default] [JOB 154] Compacted 1@0 + 1@6 files to L6 => 11331039 bytes
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.922383) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.8 rd, 118.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(22.6) write-amplify(10.5) OK, records in: 14979, records dropped: 711 output_compression: NoCompression
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.922404) EVENT_LOG_v1 {"time_micros": 1769096087922394, "job": 154, "event": "compaction_finished", "compaction_time_micros": 95382, "compaction_time_cpu_micros": 54677, "output_level": 6, "num_output_files": 1, "total_output_size": 11331039, "num_input_records": 14979, "num_output_records": 14268, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000239.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087922758, "job": 154, "event": "table_file_deletion", "file_number": 239}
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000237.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087925344, "job": 154, "event": "table_file_deletion", "file_number": 237}
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.825613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.925414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.925419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.925421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.925423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:34:47 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.925424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:34:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:48.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:34:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:48 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:48.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:48 compute-1 ceph-mon[81715]: pgmap v3831: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:48 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:49 compute-1 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 7078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:49 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:34:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:50.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:34:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:50.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:50 compute-1 ceph-mon[81715]: pgmap v3832: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:50 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:51 compute-1 sudo[254986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:34:51 compute-1 sudo[254986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:51 compute-1 sudo[254986]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:51 compute-1 sudo[255011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:34:51 compute-1 sudo[255011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:51 compute-1 sudo[255011]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:34:51 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:34:51 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:52.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:34:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:52.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:34:53 compute-1 ceph-mon[81715]: pgmap v3833: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:53 compute-1 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:54 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:34:54 compute-1 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 7083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:54.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:54.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:55 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:34:55 compute-1 ceph-mon[81715]: pgmap v3834: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:55 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:34:56 compute-1 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:34:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:56.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:56.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:57 compute-1 podman[255036]: 2026-01-22 15:34:57.055787893 +0000 UTC m=+0.043929829 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:34:57 compute-1 ceph-mon[81715]: pgmap v3835: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:57 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:34:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:58.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:34:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:58.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:58 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:34:58 compute-1 ceph-mon[81715]: pgmap v3836: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:59 compute-1 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 7088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:59 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:34:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:00.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:00.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:00 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:00 compute-1 ceph-mon[81715]: pgmap v3837: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:01 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:02.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:02 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:02.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:02 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:02 compute-1 ceph-mon[81715]: pgmap v3838: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:04 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:35:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:04.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:35:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:04 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:04.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:05 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:05 compute-1 ceph-mon[81715]: pgmap v3839: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:05 compute-1 ceph-mon[81715]: Health check update: 37 slow ops, oldest one blocked for 7093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:06 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:35:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:06.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:35:06 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:06.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:07 compute-1 ceph-mon[81715]: pgmap v3840: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:07 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:08 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:08 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:08 compute-1 ceph-mon[81715]: pgmap v3841: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:08 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:08.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:08.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:09 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:09 compute-1 ceph-mon[81715]: Health check update: 37 slow ops, oldest one blocked for 7098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:10 compute-1 ceph-mon[81715]: pgmap v3842: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:10 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:10.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:10 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:10.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:11 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:12 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:12.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:12.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:13 compute-1 podman[255056]: 2026-01-22 15:35:13.133808873 +0000 UTC m=+0.108745422 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 22 15:35:13 compute-1 ceph-mon[81715]: pgmap v3843: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:13 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:14 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:14 compute-1 ceph-mon[81715]: Health check update: 37 slow ops, oldest one blocked for 7103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:14.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:14 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:14.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:15 compute-1 ceph-mon[81715]: pgmap v3844: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:15 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:16 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:16.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:16.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:17 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:17 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:18 compute-1 ceph-mon[81715]: pgmap v3845: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:18 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:18.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:18.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:19 compute-1 ceph-mon[81715]: pgmap v3846: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:19 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/41108104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:35:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/41108104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:35:19 compute-1 ceph-mon[81715]: Health check update: 37 slow ops, oldest one blocked for 7108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:20 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:20.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:20.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:21 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:22 compute-1 ceph-mon[81715]: pgmap v3847: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:22 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:22 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:22.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:22.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:23 compute-1 ceph-mon[81715]: pgmap v3848: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:23 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:24 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:24 compute-1 ceph-mon[81715]: Health check update: 37 slow ops, oldest one blocked for 7113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:24.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:24.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:25 compute-1 ceph-mon[81715]: pgmap v3849: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:25 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:25 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:26 compute-1 ceph-mon[81715]: pgmap v3850: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:26 compute-1 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:26.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:26.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:27 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:28 compute-1 podman[255084]: 2026-01-22 15:35:28.088950359 +0000 UTC m=+0.068729690 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 15:35:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:28.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:28.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:28 compute-1 ceph-mon[81715]: pgmap v3851: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:28 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:29 compute-1 ceph-mon[81715]: Health check update: 37 slow ops, oldest one blocked for 7118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:29 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:30.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:30 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:30.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:30 compute-1 ceph-mon[81715]: pgmap v3852: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:30 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:35:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Cumulative writes: 21K writes, 119K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s
                                           Cumulative WAL: 21K writes, 21K syncs, 1.00 writes per sync, written: 0.20 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1754 writes, 10K keys, 1754 commit groups, 1.0 writes per commit group, ingest: 16.51 MB, 0.03 MB/s
                                           Interval WAL: 1754 writes, 1754 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     77.5      1.64              0.42        77    0.021       0      0       0.0       0.0
                                             L6      1/0   10.81 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.9    136.8    118.5      6.34              2.34        76    0.083    834K    46K       0.0       0.0
                                            Sum      1/0   10.81 MB   0.0      0.8     0.1      0.7       0.9      0.1       0.0   6.9    108.7    110.1      7.97              2.77       153    0.052    834K    46K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5     70.4     71.1      1.10              0.27        12    0.092     91K   5756       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    136.8    118.5      6.34              2.34        76    0.083    834K    46K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     77.6      1.63              0.42        76    0.022       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.124, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.86 GB write, 0.12 MB/s write, 0.85 GB read, 0.12 MB/s read, 8.0 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 88.18 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.00041 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4591,83.15 MB,27.3512%) FilterBlock(153,2.27 MB,0.74629%) IndexBlock(153,2.77 MB,0.909996%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 15:35:32 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:32.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:32 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:32.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:34 compute-1 ceph-mon[81715]: pgmap v3853: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:34 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:34.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:34.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:35 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:35 compute-1 ceph-mon[81715]: pgmap v3854: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:35 compute-1 ceph-mon[81715]: Health check update: 187 slow ops, oldest one blocked for 7123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:35 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:35 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:36.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:36 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:36.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:36 compute-1 ceph-mon[81715]: pgmap v3855: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:36 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:37 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:38.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:38 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:38.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:38 compute-1 ceph-mon[81715]: pgmap v3856: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:38 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:39 compute-1 ceph-mon[81715]: Health check update: 187 slow ops, oldest one blocked for 7128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:39 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:40.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:40 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:40.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:40 compute-1 ceph-mon[81715]: pgmap v3857: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:40 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:42 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:42 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:42.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:42.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:43 compute-1 ceph-mon[81715]: pgmap v3858: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:43 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:43 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:44 compute-1 podman[255104]: 2026-01-22 15:35:44.111422646 +0000 UTC m=+0.095133153 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 22 15:35:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:44.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:44 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:44.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:44 compute-1 ceph-mon[81715]: pgmap v3859: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:44 compute-1 ceph-mon[81715]: Health check update: 187 slow ops, oldest one blocked for 7133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:44 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:45 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:35:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:46.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:35:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:46 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:46.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:46 compute-1 ceph-mon[81715]: pgmap v3860: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:46 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:35:47.535 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:35:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:35:47.536 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:35:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:35:47.536 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:35:48 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:48 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:48.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:48.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:49 compute-1 ceph-mon[81715]: pgmap v3861: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:49 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:50 compute-1 ceph-mon[81715]: Health check update: 187 slow ops, oldest one blocked for 7138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:50 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:50 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:50.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:50.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:51 compute-1 ceph-mon[81715]: pgmap v3862: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:51 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:51 compute-1 sudo[255130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:35:51 compute-1 sudo[255130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:51 compute-1 sudo[255130]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:51 compute-1 sudo[255155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:35:51 compute-1 sudo[255155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:51 compute-1 sudo[255155]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:51 compute-1 sudo[255180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:35:51 compute-1 sudo[255180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:51 compute-1 sudo[255180]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:51 compute-1 sudo[255205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:35:51 compute-1 sudo[255205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:52 compute-1 sudo[255205]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:52 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:52 compute-1 sudo[255260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:35:52 compute-1 sudo[255260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:52 compute-1 sudo[255260]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:52 compute-1 sudo[255285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:35:52 compute-1 sudo[255285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:52 compute-1 sudo[255285]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:52 compute-1 sudo[255310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:35:52 compute-1 sudo[255310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:52 compute-1 sudo[255310]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:52 compute-1 sudo[255335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- inventory --format=json-pretty --filter-for-batch
Jan 22 15:35:52 compute-1 sudo[255335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:52.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:52 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:52.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:52 compute-1 podman[255400]: 2026-01-22 15:35:52.948509844 +0000 UTC m=+0.049887250 container create 3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 15:35:52 compute-1 systemd[1]: Started libpod-conmon-3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd.scope.
Jan 22 15:35:53 compute-1 systemd[1]: Started libcrun container.
Jan 22 15:35:53 compute-1 podman[255400]: 2026-01-22 15:35:52.930917289 +0000 UTC m=+0.032294725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 15:35:53 compute-1 podman[255400]: 2026-01-22 15:35:53.027502211 +0000 UTC m=+0.128879637 container init 3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 15:35:53 compute-1 podman[255400]: 2026-01-22 15:35:53.040270857 +0000 UTC m=+0.141648273 container start 3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 15:35:53 compute-1 podman[255400]: 2026-01-22 15:35:53.044512771 +0000 UTC m=+0.145890217 container attach 3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 15:35:53 compute-1 systemd[1]: libpod-3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd.scope: Deactivated successfully.
Jan 22 15:35:53 compute-1 fervent_beaver[255416]: 167 167
Jan 22 15:35:53 compute-1 conmon[255416]: conmon 3bfcc6a59f47aaa43c01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd.scope/container/memory.events
Jan 22 15:35:53 compute-1 podman[255400]: 2026-01-22 15:35:53.052207829 +0000 UTC m=+0.153585285 container died 3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 15:35:53 compute-1 systemd[1]: var-lib-containers-storage-overlay-3ad58e0a8fb98a7564e59262c086159244ad1eae06b5586206a5b4b26d323fe8-merged.mount: Deactivated successfully.
Jan 22 15:35:53 compute-1 podman[255400]: 2026-01-22 15:35:53.09843554 +0000 UTC m=+0.199812956 container remove 3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 15:35:53 compute-1 systemd[1]: libpod-conmon-3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd.scope: Deactivated successfully.
Jan 22 15:35:53 compute-1 ceph-mon[81715]: pgmap v3863: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:53 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:35:53 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:35:53 compute-1 podman[255440]: 2026-01-22 15:35:53.321442862 +0000 UTC m=+0.056670184 container create db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_colden, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 15:35:53 compute-1 systemd[1]: Started libpod-conmon-db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a.scope.
Jan 22 15:35:53 compute-1 podman[255440]: 2026-01-22 15:35:53.29475653 +0000 UTC m=+0.029983952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 15:35:53 compute-1 systemd[1]: Started libcrun container.
Jan 22 15:35:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea0a337a00fa82ba32b629835f3d595a12b0b4ca9abc8d6a8dd5505e0f0ddb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 15:35:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea0a337a00fa82ba32b629835f3d595a12b0b4ca9abc8d6a8dd5505e0f0ddb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 15:35:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea0a337a00fa82ba32b629835f3d595a12b0b4ca9abc8d6a8dd5505e0f0ddb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 15:35:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea0a337a00fa82ba32b629835f3d595a12b0b4ca9abc8d6a8dd5505e0f0ddb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 15:35:53 compute-1 podman[255440]: 2026-01-22 15:35:53.437843571 +0000 UTC m=+0.173070943 container init db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_colden, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 15:35:53 compute-1 podman[255440]: 2026-01-22 15:35:53.445260841 +0000 UTC m=+0.180488203 container start db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 15:35:53 compute-1 podman[255440]: 2026-01-22 15:35:53.449054974 +0000 UTC m=+0.184282346 container attach db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_colden, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 15:35:54 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:54 compute-1 ceph-mon[81715]: Health check update: 187 slow ops, oldest one blocked for 7143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:35:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:54.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:35:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:54.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]: [
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:     {
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:         "available": false,
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:         "ceph_device": false,
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:         "lsm_data": {},
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:         "lvs": [],
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:         "path": "/dev/sr0",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:         "rejected_reasons": [
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "Insufficient space (<5GB)",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "Has a FileSystem"
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:         ],
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:         "sys_api": {
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "actuators": null,
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "device_nodes": "sr0",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "devname": "sr0",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "human_readable_size": "482.00 KB",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "id_bus": "ata",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "model": "QEMU DVD-ROM",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "nr_requests": "2",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "parent": "/dev/sr0",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "partitions": {},
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "path": "/dev/sr0",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "removable": "1",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "rev": "2.5+",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "ro": "0",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "rotational": "1",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "sas_address": "",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "sas_device_handle": "",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "scheduler_mode": "mq-deadline",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "sectors": 0,
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "sectorsize": "2048",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "size": 493568.0,
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "support_discard": "2048",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "type": "disk",
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:             "vendor": "QEMU"
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:         }
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]:     }
Jan 22 15:35:54 compute-1 flamboyant_colden[255457]: ]
Jan 22 15:35:54 compute-1 systemd[1]: libpod-db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a.scope: Deactivated successfully.
Jan 22 15:35:54 compute-1 podman[255440]: 2026-01-22 15:35:54.652616259 +0000 UTC m=+1.387843581 container died db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_colden, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 15:35:54 compute-1 systemd[1]: libpod-db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a.scope: Consumed 1.233s CPU time.
Jan 22 15:35:54 compute-1 systemd[1]: var-lib-containers-storage-overlay-2ea0a337a00fa82ba32b629835f3d595a12b0b4ca9abc8d6a8dd5505e0f0ddb8-merged.mount: Deactivated successfully.
Jan 22 15:35:54 compute-1 podman[255440]: 2026-01-22 15:35:54.710405023 +0000 UTC m=+1.445632345 container remove db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:35:54 compute-1 systemd[1]: libpod-conmon-db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a.scope: Deactivated successfully.
Jan 22 15:35:54 compute-1 sudo[255335]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:55 compute-1 ceph-mon[81715]: pgmap v3864: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:55 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:35:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:35:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:35:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:35:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:35:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:35:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:35:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:35:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:35:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:35:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:56.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:56.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:57 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:58 compute-1 ceph-mon[81715]: pgmap v3865: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:58 compute-1 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:58 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:35:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:35:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:35:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:35:58 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:58.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:58.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:35:59 compute-1 ceph-mon[81715]: pgmap v3866: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:59 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:35:59 compute-1 podman[256727]: 2026-01-22 15:35:59.085355722 +0000 UTC m=+0.069886981 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:35:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:00 compute-1 ceph-mon[81715]: Health check update: 187 slow ops, oldest one blocked for 7148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:00 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:00.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:36:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:00 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:00.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:00 compute-1 sudo[256747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:36:00 compute-1 sudo[256747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:36:00 compute-1 sudo[256747]: pam_unix(sudo:session): session closed for user root
Jan 22 15:36:00 compute-1 sudo[256772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:36:00 compute-1 sudo[256772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:36:00 compute-1 sudo[256772]: pam_unix(sudo:session): session closed for user root
Jan 22 15:36:01 compute-1 ceph-mon[81715]: pgmap v3867: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:01 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:36:01 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:36:02 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:02 compute-1 ceph-mon[81715]: pgmap v3868: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:02 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:02.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:36:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:02 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:02.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:03 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:36:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:04.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:04 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:04.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:05 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:05 compute-1 ceph-mon[81715]: pgmap v3869: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:05 compute-1 ceph-mon[81715]: Health check update: 101 slow ops, oldest one blocked for 7153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:06.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:06.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:07 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:07 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:07 compute-1 ceph-mon[81715]: pgmap v3870: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:08 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:08.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:08.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:09 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:09 compute-1 ceph-mon[81715]: pgmap v3871: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:09 compute-1 ceph-mon[81715]: Health check update: 101 slow ops, oldest one blocked for 7158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:10 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:36:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:10 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:10.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:10.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:11 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:11 compute-1 ceph-mon[81715]: pgmap v3872: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:11 compute-1 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 15:36:12 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:12 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:12 compute-1 ceph-mon[81715]: pgmap v3873: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:36:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:36:12 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:12.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:36:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:36:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:12.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:36:13 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:36:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:14 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:14.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:14.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:14 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:14 compute-1 ceph-mon[81715]: pgmap v3874: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:14 compute-1 ceph-mon[81715]: Health check update: 101 slow ops, oldest one blocked for 7163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:15 compute-1 podman[256797]: 2026-01-22 15:36:15.429478282 +0000 UTC m=+0.176368792 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 15:36:15 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:36:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:16.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:16 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:16.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:16 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:16 compute-1 ceph-mon[81715]: pgmap v3875: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:17 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:36:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2621302862' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:36:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:36:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2621302862' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:36:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:36:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:18.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:18 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:18.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:18 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:18 compute-1 ceph-mon[81715]: pgmap v3876: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2621302862' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:36:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/2621302862' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:36:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:19 compute-1 ceph-mon[81715]: Health check update: 101 slow ops, oldest one blocked for 7168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:19 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:20.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:20.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:20 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:20 compute-1 ceph-mon[81715]: pgmap v3877: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:22 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:36:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:22.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:22 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:22.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:23 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:23 compute-1 ceph-mon[81715]: pgmap v3878: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:24.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:36:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:24 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:24.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:25 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:25 compute-1 ceph-mon[81715]: pgmap v3879: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:25 compute-1 ceph-mon[81715]: Health check update: 101 slow ops, oldest one blocked for 7173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:25 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:36:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:26 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:26.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:26.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:26 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:28 compute-1 ceph-mon[81715]: pgmap v3880: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:28.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 15:36:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:28 compute-1 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:28.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:29 compute-1 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:29 compute-1 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:29 compute-1 ceph-mon[81715]: pgmap v3881: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:29 compute-1 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:30 compute-1 podman[256823]: 2026-01-22 15:36:30.128518112 +0000 UTC m=+0.113476120 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:36:30 compute-1 ceph-mon[81715]: Health check update: 101 slow ops, oldest one blocked for 7178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:30 compute-1 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:30.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:31.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:31 compute-1 ceph-mon[81715]: pgmap v3882: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:31 compute-1 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:32.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:33.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:33 compute-1 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:33 compute-1 ceph-mon[81715]: pgmap v3883: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:34.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:34 compute-1 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:34 compute-1 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:34 compute-1 ceph-mon[81715]: pgmap v3884: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:34 compute-1 ceph-mon[81715]: Health check update: 188 slow ops, oldest one blocked for 7183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:35.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:36 compute-1 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:36.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:37.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:37 compute-1 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:37 compute-1 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:37 compute-1 ceph-mon[81715]: pgmap v3885: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:38 compute-1 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:38.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:39.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:39 compute-1 ceph-mon[81715]: pgmap v3886: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:39 compute-1 ceph-mon[81715]: Health check update: 188 slow ops, oldest one blocked for 7188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:40.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:40 compute-1 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:40 compute-1 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:40 compute-1 ceph-mon[81715]: pgmap v3887: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:41.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:42 compute-1 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:42 compute-1 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:42.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:43.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:43 compute-1 ceph-mon[81715]: pgmap v3888: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:43 compute-1 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:44 compute-1 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:44 compute-1 ceph-mon[81715]: pgmap v3889: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:44 compute-1 ceph-mon[81715]: Health check update: 120 slow ops, oldest one blocked for 7193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:44.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:45 compute-1 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:45 compute-1 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:46 compute-1 podman[256845]: 2026-01-22 15:36:46.115385956 +0000 UTC m=+0.096553583 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 15:36:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:46.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:47 compute-1 ceph-mon[81715]: pgmap v3890: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:47 compute-1 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:36:47.538 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:36:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:36:47.538 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:36:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:36:47.538 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:36:48 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:48.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:49 compute-1 ceph-mon[81715]: pgmap v3891: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:49 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:49.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:50 compute-1 ceph-mon[81715]: Health check update: 120 slow ops, oldest one blocked for 7197 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:50 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:50.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:51.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:51 compute-1 ceph-mon[81715]: pgmap v3892: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:51 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:52 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:52.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:53.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:53 compute-1 ceph-mon[81715]: pgmap v3893: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:53 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:54 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:54 compute-1 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 7202 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:54.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:55.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:55 compute-1 ceph-mon[81715]: pgmap v3894: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:55 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:56 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:56.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:57.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:57 compute-1 ceph-mon[81715]: pgmap v3895: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:57 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:58 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:58.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:36:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:59.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:59 compute-1 ceph-mon[81715]: pgmap v3896: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:59 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:59 compute-1 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 7207 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:00.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:00 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:01 compute-1 sudo[256884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:37:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:01.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:01 compute-1 sudo[256884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:01 compute-1 sudo[256884]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:01 compute-1 podman[256873]: 2026-01-22 15:37:01.118463739 +0000 UTC m=+0.100504829 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 15:37:01 compute-1 sudo[256917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:37:01 compute-1 sudo[256917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:01 compute-1 sudo[256917]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:01 compute-1 sudo[256942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:37:01 compute-1 sudo[256942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:01 compute-1 sudo[256942]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:01 compute-1 sudo[256967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:37:01 compute-1 sudo[256967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:01 compute-1 sudo[256967]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:01 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:01 compute-1 ceph-mon[81715]: pgmap v3897: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 27 op/s
Jan 22 15:37:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:02.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:02 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:02 compute-1 ceph-mon[81715]: pgmap v3898: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Jan 22 15:37:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:03.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:03 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:37:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:37:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:04.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:04 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:04 compute-1 ceph-mon[81715]: pgmap v3899: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Jan 22 15:37:04 compute-1 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 7212 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:04 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:37:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:37:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:37:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:37:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:37:04 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:37:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:05.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:06 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:06.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:07 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:07 compute-1 ceph-mon[81715]: pgmap v3900: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Jan 22 15:37:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:07.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:08 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:08.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:09 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:09 compute-1 ceph-mon[81715]: pgmap v3901: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Jan 22 15:37:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:09.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:10 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:10 compute-1 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 7217 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:10 compute-1 sudo[257022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:37:10 compute-1 sudo[257022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:10 compute-1 sudo[257022]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:10 compute-1 sudo[257047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:37:10 compute-1 sudo[257047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:10 compute-1 sudo[257047]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:10.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:11.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:11 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:11 compute-1 ceph-mon[81715]: pgmap v3902: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 117 KiB/s rd, 0 B/s wr, 195 op/s
Jan 22 15:37:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:37:11 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:37:12 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:37:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:12.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:37:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:13.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:13 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:13 compute-1 ceph-mon[81715]: pgmap v3903: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Jan 22 15:37:14 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:14 compute-1 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 7222 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:14.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:15.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:15 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:15 compute-1 ceph-mon[81715]: pgmap v3904: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 0 B/s wr, 163 op/s
Jan 22 15:37:16 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 22 15:37:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:16.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 22 15:37:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:17.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:17 compute-1 podman[257073]: 2026-01-22 15:37:17.157054112 +0000 UTC m=+0.142934847 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Jan 22 15:37:17 compute-1 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:17 compute-1 ceph-mon[81715]: pgmap v3905: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 0 B/s wr, 143 op/s
Jan 22 15:37:18 compute-1 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:37:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3339096401' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:37:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:18.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:18 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:37:18 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3339096401' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:37:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:37:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:19.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:37:19 compute-1 ceph-mon[81715]: pgmap v3906: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 22 15:37:19 compute-1 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3339096401' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:37:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3339096401' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:37:19 compute-1 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 7227 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:20 compute-1 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:20 compute-1 ceph-mon[81715]: pgmap v3907: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 22 15:37:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:20.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:21.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:21 compute-1 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:22.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:22 compute-1 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:22 compute-1 ceph-mon[81715]: pgmap v3908: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 8 op/s
Jan 22 15:37:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:23.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:23 compute-1 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:37:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:24.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:37:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:24 compute-1 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:24 compute-1 ceph-mon[81715]: pgmap v3909: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:24 compute-1 ceph-mon[81715]: Health check update: 132 slow ops, oldest one blocked for 7232 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:25.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:25 compute-1 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:26.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:27 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:37:27.049 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:37:27 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:37:27.050 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:37:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:27.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:28 compute-1 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:28 compute-1 ceph-mon[81715]: pgmap v3910: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:28.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:29 compute-1 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:29 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:29 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:29 compute-1 ceph-mon[81715]: pgmap v3911: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:29.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:30 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:30 compute-1 ceph-mon[81715]: Health check update: 132 slow ops, oldest one blocked for 7237 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:30.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:31.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:31 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:31 compute-1 ceph-mon[81715]: pgmap v3912: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:32 compute-1 podman[257100]: 2026-01-22 15:37:32.097992147 +0000 UTC m=+0.072873732 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 15:37:32 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:32.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:33.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:33 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:33 compute-1 ceph-mon[81715]: pgmap v3913: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:34 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:37:34.052 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:37:34 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:34 compute-1 ceph-mon[81715]: Health check update: 96 slow ops, oldest one blocked for 7242 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:34.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:35.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:35 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:35 compute-1 ceph-mon[81715]: pgmap v3914: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:36 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:36.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:37.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:37 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:37 compute-1 ceph-mon[81715]: pgmap v3915: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:38 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #241. Immutable memtables: 0.
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.470892) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 155] Flushing memtable with next log file: 241
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258470952, "job": 155, "event": "flush_started", "num_memtables": 1, "num_entries": 2744, "num_deletes": 540, "total_data_size": 5170603, "memory_usage": 5230480, "flush_reason": "Manual Compaction"}
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 155] Level-0 flush table #242: started
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258494595, "cf_name": "default", "job": 155, "event": "table_file_creation", "file_number": 242, "file_size": 3350889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 118729, "largest_seqno": 121468, "table_properties": {"data_size": 3340601, "index_size": 5693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 32349, "raw_average_key_size": 23, "raw_value_size": 3316033, "raw_average_value_size": 2397, "num_data_blocks": 239, "num_entries": 1383, "num_filter_entries": 1383, "num_deletions": 540, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096088, "oldest_key_time": 1769096088, "file_creation_time": 1769096258, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 242, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 155] Flush lasted 23781 microseconds, and 12020 cpu microseconds.
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.494644) [db/flush_job.cc:967] [default] [JOB 155] Level-0 flush table #242: 3350889 bytes OK
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.494703) [db/memtable_list.cc:519] [default] Level-0 commit table #242 started
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.495886) [db/memtable_list.cc:722] [default] Level-0 commit table #242: memtable #1 done
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.495897) EVENT_LOG_v1 {"time_micros": 1769096258495893, "job": 155, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.495914) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 155] Try to delete WAL files size 5157083, prev total WAL file size 5157083, number of live WAL files 2.
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000238.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.497224) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end)
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 156] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 155 Base level 0, inputs: [242(3272KB)], [240(10MB)]
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258497279, "job": 156, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [242], "files_L6": [240], "score": -1, "input_data_size": 14681928, "oldest_snapshot_seqno": -1}
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 156] Generated table #243: 14554 keys, 12846152 bytes, temperature: kUnknown
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258592808, "cf_name": "default", "job": 156, "event": "table_file_creation", "file_number": 243, "file_size": 12846152, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12765812, "index_size": 42851, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36421, "raw_key_size": 399045, "raw_average_key_size": 27, "raw_value_size": 12517844, "raw_average_value_size": 860, "num_data_blocks": 1556, "num_entries": 14554, "num_filter_entries": 14554, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096258, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 243, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.593101) [db/compaction/compaction_job.cc:1663] [default] [JOB 156] Compacted 1@0 + 1@6 files to L6 => 12846152 bytes
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.594771) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.6 rd, 134.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.8 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 15651, records dropped: 1097 output_compression: NoCompression
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.594793) EVENT_LOG_v1 {"time_micros": 1769096258594782, "job": 156, "event": "compaction_finished", "compaction_time_micros": 95607, "compaction_time_cpu_micros": 39342, "output_level": 6, "num_output_files": 1, "total_output_size": 12846152, "num_input_records": 15651, "num_output_records": 14554, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000242.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258595584, "job": 156, "event": "table_file_deletion", "file_number": 242}
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000240.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258598207, "job": 156, "event": "table_file_deletion", "file_number": 240}
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.497123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.598291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.598296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.598297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.598299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:37:38 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.598300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:37:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:38.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:39.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:39 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:39 compute-1 ceph-mon[81715]: pgmap v3916: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:39 compute-1 ceph-mon[81715]: Health check update: 96 slow ops, oldest one blocked for 7247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:40 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:40.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:41.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:41 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:41 compute-1 ceph-mon[81715]: pgmap v3917: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:42.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:43.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:43 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:43 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:43 compute-1 ceph-mon[81715]: pgmap v3918: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:44.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:45 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:45 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:45 compute-1 ceph-mon[81715]: pgmap v3919: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:45 compute-1 ceph-mon[81715]: Health check update: 96 slow ops, oldest one blocked for 7253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:45.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:46 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:46.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:47 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:47 compute-1 ceph-mon[81715]: pgmap v3920: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:47 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:47.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:37:47.539 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:37:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:37:47.540 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:37:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:37:47.540 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:37:48 compute-1 podman[257119]: 2026-01-22 15:37:48.134012703 +0000 UTC m=+0.122720470 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:37:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:48.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:37:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:49.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:37:49 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:49 compute-1 ceph-mon[81715]: pgmap v3921: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:49 compute-1 ceph-mon[81715]: Health check update: 96 slow ops, oldest one blocked for 7258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:50 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:50.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:51.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:51 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:51 compute-1 ceph-mon[81715]: pgmap v3922: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:52 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:52.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:37:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:53.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:37:53 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:53 compute-1 ceph-mon[81715]: pgmap v3923: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:54 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:54 compute-1 ceph-mon[81715]: Health check update: 96 slow ops, oldest one blocked for 7263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:54.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:55.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:56 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:56 compute-1 ceph-mon[81715]: pgmap v3924: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:56.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:57.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:57 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:57 compute-1 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:57 compute-1 ceph-mon[81715]: pgmap v3925: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:37:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:58.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:37:59 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:37:59 compute-1 ceph-mon[81715]: pgmap v3926: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:37:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:59.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:00 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:00 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:00 compute-1 ceph-mon[81715]: Health check update: 96 slow ops, oldest one blocked for 7267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:00.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e181 e181: 3 total, 3 up, 3 in
Jan 22 15:38:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:01.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:01 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:01 compute-1 ceph-mon[81715]: pgmap v3927: 305 pgs: 2 active+clean+laggy, 303 active+clean; 894 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 6.2 KiB/s rd, 1.0 MiB/s wr, 7 op/s
Jan 22 15:38:01 compute-1 ceph-mon[81715]: osdmap e181: 3 total, 3 up, 3 in
Jan 22 15:38:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:02.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:02 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e182 e182: 3 total, 3 up, 3 in
Jan 22 15:38:02 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:02 compute-1 ceph-mon[81715]: pgmap v3929: 305 pgs: 2 active+clean+laggy, 303 active+clean; 894 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 7.5 KiB/s rd, 1.2 MiB/s wr, 9 op/s
Jan 22 15:38:03 compute-1 podman[257145]: 2026-01-22 15:38:03.094034073 +0000 UTC m=+0.074094207 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 15:38:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:03.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:04 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:04 compute-1 ceph-mon[81715]: osdmap e182: 3 total, 3 up, 3 in
Jan 22 15:38:04 compute-1 ceph-mon[81715]: 151 slow requests (by type [ 'delayed' : 151 ] most affected pool [ 'vms' : 90 ])
Jan 22 15:38:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:04.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:05.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:05 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:05 compute-1 ceph-mon[81715]: pgmap v3931: 305 pgs: 2 active+clean+laggy, 303 active+clean; 902 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 2.6 MiB/s wr, 20 op/s
Jan 22 15:38:05 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:06 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:06 compute-1 ceph-mon[81715]: pgmap v3932: 305 pgs: 2 active+clean+laggy, 303 active+clean; 890 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 2.6 MiB/s wr, 50 op/s
Jan 22 15:38:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:06.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:07.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:07 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:08.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:09.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:09 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:09 compute-1 ceph-mon[81715]: pgmap v3933: 305 pgs: 2 active+clean+laggy, 303 active+clean; 890 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.0 MiB/s wr, 38 op/s
Jan 22 15:38:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 e183: 3 total, 3 up, 3 in
Jan 22 15:38:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:09 compute-1 sshd-session[257165]: Invalid user admin from 45.148.10.121 port 47624
Jan 22 15:38:09 compute-1 sshd-session[257165]: Connection closed by invalid user admin 45.148.10.121 port 47624 [preauth]
Jan 22 15:38:10 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:10 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:10 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:10 compute-1 ceph-mon[81715]: osdmap e183: 3 total, 3 up, 3 in
Jan 22 15:38:10 compute-1 sudo[257167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:38:10 compute-1 sudo[257167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:10 compute-1 sudo[257167]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:10 compute-1 sudo[257192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:38:10 compute-1 sudo[257192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:10 compute-1 sudo[257192]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:10.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:10 compute-1 sudo[257217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:38:10 compute-1 sudo[257217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:10 compute-1 sudo[257217]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:10 compute-1 sudo[257242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 15:38:10 compute-1 sudo[257242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:11.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:11 compute-1 podman[257338]: 2026-01-22 15:38:11.362862379 +0000 UTC m=+0.057077915 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 15:38:11 compute-1 podman[257338]: 2026-01-22 15:38:11.50228236 +0000 UTC m=+0.196497906 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 15:38:11 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:11 compute-1 ceph-mon[81715]: pgmap v3935: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.0 MiB/s wr, 42 op/s
Jan 22 15:38:11 compute-1 sudo[257242]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:12 compute-1 sudo[257460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:38:12 compute-1 sudo[257460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:12 compute-1 sudo[257460]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:12 compute-1 sudo[257485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:38:12 compute-1 sudo[257485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:12 compute-1 sudo[257485]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:12 compute-1 sudo[257510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:38:12 compute-1 sudo[257510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:12 compute-1 sudo[257510]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:12 compute-1 sudo[257535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:38:12 compute-1 sudo[257535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:12 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:38:12 compute-1 ceph-mon[81715]: pgmap v3936: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 879 KiB/s wr, 36 op/s
Jan 22 15:38:12 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:38:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:12.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:13 compute-1 sudo[257535]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:13.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:13 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:38:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:38:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:38:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:38:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:38:13 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:38:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:14.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:15 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:15 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:15 compute-1 ceph-mon[81715]: pgmap v3937: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 27 op/s
Jan 22 15:38:15 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:15.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:16 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:16.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:17.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:17 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:17 compute-1 ceph-mon[81715]: pgmap v3938: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 409 B/s wr, 4 op/s
Jan 22 15:38:18 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:18 compute-1 ceph-mon[81715]: pgmap v3939: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 409 B/s wr, 4 op/s
Jan 22 15:38:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3149379598' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:38:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3149379598' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:38:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:18.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:19 compute-1 podman[257593]: 2026-01-22 15:38:19.131451064 +0000 UTC m=+0.110572282 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 15:38:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:19.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:19 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:19 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:20 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:20 compute-1 ceph-mon[81715]: pgmap v3940: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:20.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:21 compute-1 sudo[257620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:38:21 compute-1 sudo[257620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:21 compute-1 sudo[257620]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:21.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:21 compute-1 sudo[257645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:38:21 compute-1 sudo[257645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:21 compute-1 sudo[257645]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:21 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:38:21 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:38:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:22.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:22 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:22 compute-1 ceph-mon[81715]: pgmap v3941: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:23.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:23 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:24 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:24.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:24 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:24 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:24 compute-1 ceph-mon[81715]: pgmap v3942: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:24 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:25.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:25 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:26.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:26 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:26 compute-1 ceph-mon[81715]: pgmap v3943: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:27.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:27 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:28.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:29 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:29 compute-1 ceph-mon[81715]: pgmap v3944: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:38:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:29.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:38:29 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:30 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:30 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:30.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:31 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:31 compute-1 ceph-mon[81715]: pgmap v3945: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:31 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:38:31.238 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:38:31 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:38:31.240 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:38:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:31.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:32 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:32.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:33.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:33 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:33 compute-1 ceph-mon[81715]: pgmap v3946: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:34 compute-1 podman[257670]: 2026-01-22 15:38:34.085433439 +0000 UTC m=+0.064129894 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 15:38:34 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:34 compute-1 ceph-mon[81715]: pgmap v3947: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:34 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:34 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:34.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:35.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:35 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:38:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:36.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:38:37 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:37 compute-1 ceph-mon[81715]: pgmap v3948: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:37 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:38:37.242 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:38:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:37.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:38 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:38 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:38:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:38.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:38:39 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:39 compute-1 ceph-mon[81715]: pgmap v3949: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:38:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:39.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:38:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:40 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:40 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:40.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:41 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:41 compute-1 ceph-mon[81715]: pgmap v3950: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:41.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:42 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:42.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:38:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:43.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:38:43 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:43 compute-1 ceph-mon[81715]: pgmap v3951: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:44 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:44 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7312 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:44.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:45.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:45 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:45 compute-1 ceph-mon[81715]: pgmap v3952: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:46 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:46 compute-1 ceph-mon[81715]: pgmap v3953: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:46.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:47.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:38:47.540 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:38:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:38:47.541 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:38:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:38:47.541 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:38:47 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:48.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:49.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:49 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:49 compute-1 ceph-mon[81715]: pgmap v3954: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #244. Immutable memtables: 0.
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.718511) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 157] Flushing memtable with next log file: 244
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329718543, "job": 157, "event": "flush_started", "num_memtables": 1, "num_entries": 1301, "num_deletes": 380, "total_data_size": 2116724, "memory_usage": 2160112, "flush_reason": "Manual Compaction"}
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 157] Level-0 flush table #245: started
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329728944, "cf_name": "default", "job": 157, "event": "table_file_creation", "file_number": 245, "file_size": 1389488, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 121473, "largest_seqno": 122769, "table_properties": {"data_size": 1384040, "index_size": 2458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16488, "raw_average_key_size": 22, "raw_value_size": 1371300, "raw_average_value_size": 1838, "num_data_blocks": 104, "num_entries": 746, "num_filter_entries": 746, "num_deletions": 380, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096259, "oldest_key_time": 1769096259, "file_creation_time": 1769096329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 245, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 157] Flush lasted 10469 microseconds, and 4427 cpu microseconds.
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.728982) [db/flush_job.cc:967] [default] [JOB 157] Level-0 flush table #245: 1389488 bytes OK
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.728998) [db/memtable_list.cc:519] [default] Level-0 commit table #245 started
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.729973) [db/memtable_list.cc:722] [default] Level-0 commit table #245: memtable #1 done
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.729986) EVENT_LOG_v1 {"time_micros": 1769096329729982, "job": 157, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.730001) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 157] Try to delete WAL files size 2109829, prev total WAL file size 2109829, number of live WAL files 2.
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000241.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.730714) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035373930' seq:72057594037927935, type:22 .. '6C6F676D0036303433' seq:0, type:0; will stop at (end)
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 158] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 157 Base level 0, inputs: [245(1356KB)], [243(12MB)]
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329730796, "job": 158, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [245], "files_L6": [243], "score": -1, "input_data_size": 14235640, "oldest_snapshot_seqno": -1}
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 158] Generated table #246: 14521 keys, 14041153 bytes, temperature: kUnknown
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329828277, "cf_name": "default", "job": 158, "event": "table_file_creation", "file_number": 246, "file_size": 14041153, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13959442, "index_size": 44286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36357, "raw_key_size": 398934, "raw_average_key_size": 27, "raw_value_size": 13710447, "raw_average_value_size": 944, "num_data_blocks": 1614, "num_entries": 14521, "num_filter_entries": 14521, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 246, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.828500) [db/compaction/compaction_job.cc:1663] [default] [JOB 158] Compacted 1@0 + 1@6 files to L6 => 14041153 bytes
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.829576) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.9 rd, 144.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 12.3 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(20.4) write-amplify(10.1) OK, records in: 15300, records dropped: 779 output_compression: NoCompression
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.829591) EVENT_LOG_v1 {"time_micros": 1769096329829584, "job": 158, "event": "compaction_finished", "compaction_time_micros": 97540, "compaction_time_cpu_micros": 63720, "output_level": 6, "num_output_files": 1, "total_output_size": 14041153, "num_input_records": 15300, "num_output_records": 14521, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000245.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329829908, "job": 158, "event": "table_file_deletion", "file_number": 245}
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000243.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329832245, "job": 158, "event": "table_file_deletion", "file_number": 243}
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.730579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.832317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.832326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:38:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.833046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.833074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:38:49 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.833079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:38:50 compute-1 podman[257689]: 2026-01-22 15:38:50.100479778 +0000 UTC m=+0.085593366 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 22 15:38:50 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:50 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:50 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:50.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:38:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:51.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:38:51 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:51 compute-1 ceph-mon[81715]: pgmap v3955: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:52 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:52 compute-1 ceph-mon[81715]: pgmap v3956: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:52.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:53.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:53 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:54.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:55 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:55 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:55 compute-1 ceph-mon[81715]: pgmap v3957: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:55 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:55.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:56 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:56.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:57.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:57 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:57 compute-1 ceph-mon[81715]: pgmap v3958: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:58.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:38:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:59.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:59 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:59 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:59 compute-1 ceph-mon[81715]: pgmap v3959: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:59 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:00 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:00 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:00 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:39:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:00.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:39:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:01.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:01 compute-1 ceph-mon[81715]: pgmap v3960: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:02 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:02 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:02.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:03.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:03 compute-1 ceph-mon[81715]: pgmap v3961: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:03 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:04 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:04 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:04.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:05 compute-1 podman[257715]: 2026-01-22 15:39:05.053342703 +0000 UTC m=+0.046969692 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 15:39:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:05.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:05 compute-1 ceph-mon[81715]: pgmap v3962: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:05 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:05 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:06 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:06 compute-1 ceph-mon[81715]: pgmap v3963: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:06.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:07.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:07 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:08.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:09 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:09 compute-1 ceph-mon[81715]: pgmap v3964: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:09.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:09 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:09 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:39:09.916 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:39:09 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:39:09.917 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:39:10 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:10 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:10 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:39:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:10.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:39:11 compute-1 ceph-mon[81715]: pgmap v3965: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:11 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:11.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:12 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:12 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:12 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:12 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:12.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:13 compute-1 ceph-mon[81715]: pgmap v3966: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:13 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:13.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:14 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:14 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:14 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:14 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:14 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:14.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:15.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:15 compute-1 ceph-mon[81715]: pgmap v3967: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:15 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:15 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:15 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:39:15.919 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:39:16 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:16 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:16 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:16.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:17 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:17.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:18 compute-1 ceph-mon[81715]: pgmap v3968: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:18 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:18 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:18 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:18 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:18 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:18.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:19.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:19 compute-1 ceph-mon[81715]: pgmap v3969: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4252721326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:39:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/4252721326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:39:19 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:19 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:20 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:20 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:20 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:20.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:21 compute-1 podman[257736]: 2026-01-22 15:39:21.115489925 +0000 UTC m=+0.112513545 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 22 15:39:21 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:21 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:21 compute-1 ceph-mon[81715]: pgmap v3970: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:21.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:21 compute-1 sudo[257762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:39:21 compute-1 sudo[257762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:21 compute-1 sudo[257762]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:21 compute-1 sudo[257787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:39:21 compute-1 sudo[257787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:21 compute-1 sudo[257787]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:21 compute-1 sudo[257812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:39:21 compute-1 sudo[257812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:21 compute-1 sudo[257812]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:21 compute-1 sudo[257837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:39:21 compute-1 sudo[257837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:22 compute-1 sudo[257837]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:22 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:22 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:39:22 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:39:22 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:22 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:39:22 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:22.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:39:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:23.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:23 compute-1 ceph-mon[81715]: pgmap v3971: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:23 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:24 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:24 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:24 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:24.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:25 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:25 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:25 compute-1 ceph-mon[81715]: pgmap v3972: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:25 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:25.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:26 compute-1 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:26 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:26 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:39:26 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:26 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:26 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:26.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:27.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:28 compute-1 ceph-mon[81715]: pgmap v3973: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:39:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:39:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:39:28 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:39:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:39:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:39:28 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:39:28 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:28 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:28 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:28.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:29 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:29 compute-1 ceph-mon[81715]: pgmap v3974: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:29.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:30 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:30 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:30 compute-1 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:30 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:30 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:30 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:30.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:31.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:31 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:31 compute-1 ceph-mon[81715]: pgmap v3975: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:32 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:32 compute-1 ceph-mon[81715]: pgmap v3976: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:32 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:32 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:32 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:32.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:33 compute-1 sudo[257893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:39:33 compute-1 sudo[257893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:33 compute-1 sudo[257893]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:33 compute-1 sudo[257918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:39:33 compute-1 sudo[257918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:33 compute-1 sudo[257918]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:33.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:34 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:39:34 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:39:34 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:34 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:34 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:34.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:35 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:35.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:36 compute-1 podman[257943]: 2026-01-22 15:39:36.06654735 +0000 UTC m=+0.061516895 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 15:39:36 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:36 compute-1 ceph-mon[81715]: pgmap v3977: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:36 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 7363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:36 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:36 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:36 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:36.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:37.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:37 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:37 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:37 compute-1 ceph-mon[81715]: pgmap v3978: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:38 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:38 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:38 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:38.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:39:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:39.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:39:39 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:39 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:39 compute-1 ceph-mon[81715]: pgmap v3979: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:40 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:40 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:40 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 7368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:40 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:40 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:40 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:40.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:41.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:41 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:41 compute-1 ceph-mon[81715]: pgmap v3980: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:42 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:42 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:42 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:42 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:42.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:43.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:43 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:43 compute-1 ceph-mon[81715]: pgmap v3981: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:44 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:44 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:44 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:44 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:44.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:45.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:45 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:45 compute-1 ceph-mon[81715]: pgmap v3982: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:45 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 7373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:46 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:46 compute-1 ceph-mon[81715]: pgmap v3983: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:46 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:46 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:46 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:46.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:47.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:39:47.541 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:39:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:39:47.541 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:39:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:39:47.542 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:39:47 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:48 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:48 compute-1 ceph-mon[81715]: pgmap v3984: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:48 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:48 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:48 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:48.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:49.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:49 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:50 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:50 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:50 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 7378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:50 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:50 compute-1 ceph-mon[81715]: pgmap v3985: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:50 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:50 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:50 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:50.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:51.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:51 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:52 compute-1 podman[257962]: 2026-01-22 15:39:52.107578919 +0000 UTC m=+0.090335616 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 15:39:52 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:52 compute-1 ceph-mon[81715]: pgmap v3986: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:52 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:52 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:52 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:52.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:53.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:53 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:54 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:54 compute-1 ceph-mon[81715]: pgmap v3987: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:54 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:54 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:54 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:54.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:55 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:55.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:55 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 7383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:55 compute-1 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:56 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:56 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:56 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:56.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:56 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:39:56 compute-1 ceph-mon[81715]: pgmap v3988: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:57.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:57 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:39:57 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:39:58 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:58 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:58 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:58.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:59 compute-1 ceph-mon[81715]: pgmap v3989: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:59 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:39:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:39:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:59.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:00 compute-1 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:00 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:00 compute-1 ceph-mon[81715]: Health detail: HEALTH_WARN 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops
Jan 22 15:40:00 compute-1 ceph-mon[81715]: [WRN] SLOW_OPS: 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops
Jan 22 15:40:00 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:00 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:00 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:40:00 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:00.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:40:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:01.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:02 compute-1 ceph-mon[81715]: pgmap v3990: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:02 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:02 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:02 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:02 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:02.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:03 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:03 compute-1 ceph-mon[81715]: pgmap v3991: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:03 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:03.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:04 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #247. Immutable memtables: 0.
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.829196) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 159] Flushing memtable with next log file: 247
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404829249, "job": 159, "event": "flush_started", "num_memtables": 1, "num_entries": 1346, "num_deletes": 384, "total_data_size": 2279152, "memory_usage": 2306344, "flush_reason": "Manual Compaction"}
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 159] Level-0 flush table #248: started
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404837938, "cf_name": "default", "job": 159, "event": "table_file_creation", "file_number": 248, "file_size": 989946, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 122774, "largest_seqno": 124115, "table_properties": {"data_size": 985175, "index_size": 1845, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 16963, "raw_average_key_size": 23, "raw_value_size": 973300, "raw_average_value_size": 1333, "num_data_blocks": 77, "num_entries": 730, "num_filter_entries": 730, "num_deletions": 384, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096330, "oldest_key_time": 1769096330, "file_creation_time": 1769096404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 248, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 159] Flush lasted 8761 microseconds, and 4369 cpu microseconds.
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.837973) [db/flush_job.cc:967] [default] [JOB 159] Level-0 flush table #248: 989946 bytes OK
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.837989) [db/memtable_list.cc:519] [default] Level-0 commit table #248 started
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.839181) [db/memtable_list.cc:722] [default] Level-0 commit table #248: memtable #1 done
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.839196) EVENT_LOG_v1 {"time_micros": 1769096404839192, "job": 159, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.839212) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 159] Try to delete WAL files size 2272058, prev total WAL file size 2272058, number of live WAL files 2.
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000244.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.840193) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353039' seq:72057594037927935, type:22 .. '6D6772737461740033373632' seq:0, type:0; will stop at (end)
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 160] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 159 Base level 0, inputs: [248(966KB)], [246(13MB)]
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404840305, "job": 160, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [248], "files_L6": [246], "score": -1, "input_data_size": 15031099, "oldest_snapshot_seqno": -1}
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 160] Generated table #249: 14500 keys, 11547156 bytes, temperature: kUnknown
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404926893, "cf_name": "default", "job": 160, "event": "table_file_creation", "file_number": 249, "file_size": 11547156, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11469209, "index_size": 40586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36293, "raw_key_size": 398265, "raw_average_key_size": 27, "raw_value_size": 11224120, "raw_average_value_size": 774, "num_data_blocks": 1460, "num_entries": 14500, "num_filter_entries": 14500, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 249, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.927228) [db/compaction/compaction_job.cc:1663] [default] [JOB 160] Compacted 1@0 + 1@6 files to L6 => 11547156 bytes
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.928799) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.4 rd, 133.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.4 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(26.8) write-amplify(11.7) OK, records in: 15251, records dropped: 751 output_compression: NoCompression
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.928825) EVENT_LOG_v1 {"time_micros": 1769096404928813, "job": 160, "event": "compaction_finished", "compaction_time_micros": 86678, "compaction_time_cpu_micros": 35825, "output_level": 6, "num_output_files": 1, "total_output_size": 11547156, "num_input_records": 15251, "num_output_records": 14500, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000248.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404929234, "job": 160, "event": "table_file_deletion", "file_number": 248}
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000246.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404934265, "job": 160, "event": "table_file_deletion", "file_number": 246}
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.840095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.934389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.934398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.934402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.934405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:04 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.934408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:04 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:04 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:04 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:04.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:05 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:05 compute-1 ceph-mon[81715]: pgmap v3992: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:05 compute-1 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 7393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:05 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:06 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:06 compute-1 ceph-mon[81715]: pgmap v3993: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:06 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:06 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:06 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:06.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:07 compute-1 podman[257989]: 2026-01-22 15:40:07.086623313 +0000 UTC m=+0.058402161 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 15:40:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:07.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:07 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:08 compute-1 ceph-mon[81715]: pgmap v3994: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:08 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:08 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:08 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:08.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:09.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:09 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:10 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:10 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:10 compute-1 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 7398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:10 compute-1 ceph-mon[81715]: pgmap v3995: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:10 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:10 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:10 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:10.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:11.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:11 compute-1 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:12 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:12 compute-1 ceph-mon[81715]: pgmap v3996: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:12.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:14 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:14 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:14 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:14 compute-1 ceph-mon[81715]: pgmap v3997: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:15.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:15 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:15.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:15 compute-1 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 7403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:16 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:16 compute-1 ceph-mon[81715]: pgmap v3998: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:17.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:17.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:17 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:18 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:18 compute-1 ceph-mon[81715]: pgmap v3999: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/18665897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:40:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/18665897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:40:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:19.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:19.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:19 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:19 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:19 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:20 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:20 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:20 compute-1 ceph-mon[81715]: pgmap v4000: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:21.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:21.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:21 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:22 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:22 compute-1 ceph-mon[81715]: pgmap v4001: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:23.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:23 compute-1 podman[258011]: 2026-01-22 15:40:23.119414859 +0000 UTC m=+0.093433948 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 22 15:40:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:23.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:24 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:24 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:25.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:25.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:25 compute-1 ceph-mon[81715]: pgmap v4002: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:25 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:25 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:25 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:26 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:26 compute-1 ceph-mon[81715]: pgmap v4003: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:27.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:27.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:27 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:28 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:28 compute-1 ceph-mon[81715]: pgmap v4004: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:29.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:29.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:29 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:29 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:30 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:30 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:30 compute-1 ceph-mon[81715]: pgmap v4005: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:31.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:31.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:31 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #250. Immutable memtables: 0.
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.969364) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 161] Flushing memtable with next log file: 250
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096431969400, "job": 161, "event": "flush_started", "num_memtables": 1, "num_entries": 636, "num_deletes": 298, "total_data_size": 728167, "memory_usage": 741176, "flush_reason": "Manual Compaction"}
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 161] Level-0 flush table #251: started
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096431974811, "cf_name": "default", "job": 161, "event": "table_file_creation", "file_number": 251, "file_size": 476871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 124120, "largest_seqno": 124751, "table_properties": {"data_size": 473883, "index_size": 831, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 9239, "raw_average_key_size": 21, "raw_value_size": 467164, "raw_average_value_size": 1076, "num_data_blocks": 36, "num_entries": 434, "num_filter_entries": 434, "num_deletions": 298, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096405, "oldest_key_time": 1769096405, "file_creation_time": 1769096431, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 251, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 161] Flush lasted 5484 microseconds, and 1765 cpu microseconds.
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.974849) [db/flush_job.cc:967] [default] [JOB 161] Level-0 flush table #251: 476871 bytes OK
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.974863) [db/memtable_list.cc:519] [default] Level-0 commit table #251 started
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975943) [db/memtable_list.cc:722] [default] Level-0 commit table #251: memtable #1 done
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975954) EVENT_LOG_v1 {"time_micros": 1769096431975951, "job": 161, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975967) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 161] Try to delete WAL files size 724377, prev total WAL file size 724377, number of live WAL files 2.
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000247.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.976401) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end)
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 162] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 161 Base level 0, inputs: [251(465KB)], [249(11MB)]
Jan 22 15:40:31 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096431976438, "job": 162, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [251], "files_L6": [249], "score": -1, "input_data_size": 12024027, "oldest_snapshot_seqno": -1}
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 162] Generated table #252: 14329 keys, 10215738 bytes, temperature: kUnknown
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096432039925, "cf_name": "default", "job": 162, "event": "table_file_creation", "file_number": 252, "file_size": 10215738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10140138, "index_size": 38687, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35845, "raw_key_size": 394998, "raw_average_key_size": 27, "raw_value_size": 9899258, "raw_average_value_size": 690, "num_data_blocks": 1380, "num_entries": 14329, "num_filter_entries": 14329, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096431, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 252, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.040156) [db/compaction/compaction_job.cc:1663] [default] [JOB 162] Compacted 1@0 + 1@6 files to L6 => 10215738 bytes
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.042126) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.2 rd, 160.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.0 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(46.6) write-amplify(21.4) OK, records in: 14934, records dropped: 605 output_compression: NoCompression
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.042143) EVENT_LOG_v1 {"time_micros": 1769096432042134, "job": 162, "event": "compaction_finished", "compaction_time_micros": 63556, "compaction_time_cpu_micros": 25756, "output_level": 6, "num_output_files": 1, "total_output_size": 10215738, "num_input_records": 14934, "num_output_records": 14329, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000251.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096432042308, "job": 162, "event": "table_file_deletion", "file_number": 251}
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000249.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096432044350, "job": 162, "event": "table_file_deletion", "file_number": 249}
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.976331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.044424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.044431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.044432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.044442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:32 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.044444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:32 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:32 compute-1 ceph-mon[81715]: pgmap v4006: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:32 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:33.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:33.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:33 compute-1 sudo[258037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:40:33 compute-1 sudo[258037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:33 compute-1 sudo[258037]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:33 compute-1 sudo[258062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:40:33 compute-1 sudo[258062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:33 compute-1 sudo[258062]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:33 compute-1 sudo[258087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:40:33 compute-1 sudo[258087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:33 compute-1 sudo[258087]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:33 compute-1 sudo[258112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 15:40:33 compute-1 sudo[258112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:33 compute-1 sudo[258112]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:34 compute-1 sudo[258157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:40:34 compute-1 sudo[258157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:34 compute-1 sudo[258157]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:34 compute-1 sudo[258182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:40:34 compute-1 sudo[258182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:34 compute-1 sudo[258182]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:34 compute-1 sudo[258207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:40:34 compute-1 sudo[258207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:34 compute-1 sudo[258207]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:34 compute-1 sudo[258232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:40:34 compute-1 sudo[258232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:34 compute-1 sudo[258232]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:34 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:40:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:40:34 compute-1 ceph-mon[81715]: pgmap v4007: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:40:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:40:34 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:34 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:40:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:40:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:40:34 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:40:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:35.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:40:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:35.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:40:35 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:36 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:37.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:37 compute-1 ceph-mon[81715]: pgmap v4008: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:37 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:37.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:38 compute-1 podman[258285]: 2026-01-22 15:40:38.055254377 +0000 UTC m=+0.051614928 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:40:38 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:39.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:39.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:39 compute-1 ceph-mon[81715]: pgmap v4009: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:39 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:40 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:40 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:40 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:41.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:41.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:42 compute-1 ceph-mon[81715]: pgmap v4010: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:42 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:43.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:43.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:43 compute-1 sudo[258305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:40:43 compute-1 sudo[258305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:43 compute-1 sudo[258305]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:43 compute-1 sudo[258330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:40:43 compute-1 sudo[258330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:43 compute-1 sudo[258330]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:43 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:40:43 compute-1 ceph-mon[81715]: pgmap v4011: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:43 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:43 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:40:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:45.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:45 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:45 compute-1 ceph-mon[81715]: pgmap v4012: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:45.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:45 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:45 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:47.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:47.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:40:47.542 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:40:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:40:47.542 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:40:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:40:47.542 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:40:47 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:47 compute-1 ceph-mon[81715]: pgmap v4013: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:47 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:48 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:48 compute-1 ceph-mon[81715]: pgmap v4014: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:49.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:49.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:49 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:49 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7438 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:50 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:50 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:50 compute-1 ceph-mon[81715]: pgmap v4015: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:50 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:51.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:51.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:53.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:53 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:53 compute-1 ceph-mon[81715]: pgmap v4016: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:53.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:54 compute-1 podman[258355]: 2026-01-22 15:40:54.100646484 +0000 UTC m=+0.082912183 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Jan 22 15:40:54 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:55.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:55.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:55 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:55 compute-1 ceph-mon[81715]: pgmap v4017: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:55 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7443 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:55 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:55 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:56 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:57.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:57.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:57 compute-1 ceph-mon[81715]: pgmap v4018: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:57 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:58 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:59.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:40:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:59.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:59 compute-1 ceph-mon[81715]: pgmap v4019: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:00 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:00 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:00 compute-1 ceph-mon[81715]: pgmap v4020: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:00 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:41:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:01.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:41:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:01.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:01 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:41:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:03.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:41:03 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:03 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:03 compute-1 ceph-mon[81715]: pgmap v4021: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:03.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:04 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:04 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:05.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:05 compute-1 ceph-mon[81715]: pgmap v4022: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:05 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:05.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:05 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:06 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:07.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:07 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:07 compute-1 ceph-mon[81715]: pgmap v4023: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:07.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:08 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:09 compute-1 podman[258381]: 2026-01-22 15:41:09.056054969 +0000 UTC m=+0.051131335 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:41:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:09.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:09.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:10 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:10 compute-1 ceph-mon[81715]: pgmap v4024: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:10 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:11.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:11 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:11 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:11 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:11 compute-1 ceph-mon[81715]: pgmap v4025: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:11.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:12 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:13.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:13 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:13 compute-1 ceph-mon[81715]: pgmap v4026: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:13.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:14 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:15.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:15.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:15 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:15 compute-1 ceph-mon[81715]: pgmap v4027: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:15 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7463 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:15 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:16 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:17.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:17.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:17 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:17 compute-1 ceph-mon[81715]: pgmap v4028: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:18 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:19.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:19.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:19 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:19 compute-1 ceph-mon[81715]: pgmap v4029: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1774652085' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:41:19 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1774652085' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:41:20 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:20 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:20 compute-1 ceph-mon[81715]: pgmap v4030: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:20 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:21.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:21.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:21 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:22 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:22 compute-1 ceph-mon[81715]: pgmap v4031: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:23.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:23.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:23 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:24 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:24 compute-1 ceph-mon[81715]: pgmap v4032: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:25.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:25 compute-1 podman[258400]: 2026-01-22 15:41:25.130096963 +0000 UTC m=+0.108737302 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 15:41:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:41:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:25.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:41:25 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:25 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:25 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:41:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:27.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:41:27 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:27 compute-1 ceph-mon[81715]: pgmap v4033: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:27.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:28 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:28 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:28 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:29.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:29.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:29 compute-1 ceph-mon[81715]: pgmap v4034: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:29 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:30 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:30 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:30 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:30 compute-1 ceph-mon[81715]: pgmap v4035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:31.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:31.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:31 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:33 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:33 compute-1 ceph-mon[81715]: pgmap v4036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:33.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:33.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:34 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:35.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:35 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:35 compute-1 ceph-mon[81715]: pgmap v4037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:35 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:35.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:35 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:36 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:36 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:41:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:37.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:41:37 compute-1 ceph-mon[81715]: pgmap v4038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:37 compute-1 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:38 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:39.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:39.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:39 compute-1 ceph-mon[81715]: pgmap v4039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:39 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:40 compute-1 podman[258426]: 2026-01-22 15:41:40.067928372 +0000 UTC m=+0.055418981 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 15:41:40 compute-1 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:40 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:40 compute-1 ceph-mon[81715]: pgmap v4040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:40 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:41.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:41.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:41 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:42 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:42 compute-1 ceph-mon[81715]: pgmap v4041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:43.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:43.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:43 compute-1 sudo[258445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:43 compute-1 sudo[258445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:43 compute-1 sudo[258445]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:43 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:43 compute-1 sudo[258470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:41:43 compute-1 sudo[258470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:43 compute-1 sudo[258470]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:43 compute-1 sudo[258495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:43 compute-1 sudo[258495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:43 compute-1 sudo[258495]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:44 compute-1 sudo[258520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:41:44 compute-1 sudo[258520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:44 compute-1 sudo[258520]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:44 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:44 compute-1 ceph-mon[81715]: pgmap v4042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:41:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:41:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 15:41:44 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 15:41:44 compute-1 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 7492 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:45.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:45.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:46 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:47 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:41:47 compute-1 ceph-mon[81715]: pgmap v4043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:47 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:41:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:47.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:47.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:41:47.542 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:41:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:41:47.542 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:41:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:41:47.542 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:41:48 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:41:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:41:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:41:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:41:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:41:48 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:41:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:49.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:49 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:49 compute-1 ceph-mon[81715]: pgmap v4044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:49 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:49.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:50 compute-1 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 7497 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:50 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:50 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:51.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:51 compute-1 ceph-mon[81715]: pgmap v4045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:51 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:51.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:52 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:53.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:53 compute-1 ceph-mon[81715]: pgmap v4046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:53 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:53.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:53 compute-1 sudo[258577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:53 compute-1 sudo[258577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:53 compute-1 sudo[258577]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:53 compute-1 sudo[258602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:41:53 compute-1 sudo[258602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:53 compute-1 sudo[258602]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:41:54 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:41:54 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:55.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:55 compute-1 ceph-mon[81715]: pgmap v4047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:55 compute-1 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 7502 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:55 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:55 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:56 compute-1 podman[258627]: 2026-01-22 15:41:56.10704384 +0000 UTC m=+0.093787757 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 15:41:56 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:57.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:57 compute-1 ceph-mon[81715]: pgmap v4048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:57 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:57.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:59.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:59 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:59 compute-1 ceph-mon[81715]: pgmap v4049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:41:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:59.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:00 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:00 compute-1 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 7507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:00 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:01.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:01.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:01 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:01 compute-1 ceph-mon[81715]: pgmap v4050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:01 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:02 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:02 compute-1 ceph-mon[81715]: pgmap v4051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:03.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:03.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:04 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:42:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:05.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:42:05 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:05 compute-1 ceph-mon[81715]: pgmap v4052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:05 compute-1 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 7512 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:05.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:05 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:06 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:06 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:07.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:07 compute-1 ceph-mon[81715]: pgmap v4053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:07 compute-1 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:07.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:09.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:09 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:09 compute-1 ceph-mon[81715]: pgmap v4054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:09.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:10 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:10 compute-1 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 7517 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:10 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:11 compute-1 podman[258653]: 2026-01-22 15:42:11.054857918 +0000 UTC m=+0.047845855 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:42:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:11.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:11.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:11 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:11 compute-1 ceph-mon[81715]: pgmap v4055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:13.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:13 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:13 compute-1 ceph-mon[81715]: pgmap v4056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:13.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:14 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:14 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:42:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:15.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:42:15 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:15 compute-1 ceph-mon[81715]: pgmap v4057: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:15 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 7522 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:15 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:42:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:15.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:42:15 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:16 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:17.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:17 compute-1 ceph-mon[81715]: pgmap v4058: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:17.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:18 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:18 compute-1 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:19.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:19.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:20 compute-1 ceph-mon[81715]: pgmap v4059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:20 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1173385838' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:42:20 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1173385838' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:42:20 compute-1 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:20 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:21.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:21 compute-1 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 7527 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:21 compute-1 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:21 compute-1 ceph-mon[81715]: pgmap v4060: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:21 compute-1 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:42:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:21.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:42:22 compute-1 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:23.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:23 compute-1 ceph-mon[81715]: pgmap v4061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:23 compute-1 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:23.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:25.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:25 compute-1 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:25 compute-1 ceph-mon[81715]: pgmap v4062: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:25 compute-1 ceph-mon[81715]: Health check update: 179 slow ops, oldest one blocked for 7532 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:25.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:25 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:26 compute-1 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:26 compute-1 ceph-mon[81715]: pgmap v4063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:27 compute-1 podman[258672]: 2026-01-22 15:42:27.106324079 +0000 UTC m=+0.096302486 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 15:42:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:27.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:27.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:27 compute-1 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:28 compute-1 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:28 compute-1 ceph-mon[81715]: pgmap v4064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:29.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:29.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:29 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:30 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:31 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:31 compute-1 ceph-mon[81715]: Health check update: 179 slow ops, oldest one blocked for 7537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:31 compute-1 ceph-mon[81715]: pgmap v4065: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:31 compute-1 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 15:42:31 compute-1 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 15:42:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:31.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:31.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:32 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:33 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:33 compute-1 ceph-mon[81715]: pgmap v4066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:33.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:33.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:34 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:34 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:35 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:35 compute-1 ceph-mon[81715]: pgmap v4067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:35 compute-1 ceph-mon[81715]: Health check update: 158 slow ops, oldest one blocked for 7542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:35.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:35.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:36 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:37.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:37 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:37 compute-1 ceph-mon[81715]: pgmap v4068: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:37.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:38 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:39.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:39 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:39 compute-1 ceph-mon[81715]: pgmap v4069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:39.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:40 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:40 compute-1 ceph-mon[81715]: Health check update: 158 slow ops, oldest one blocked for 7547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:41.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:41 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:41 compute-1 ceph-mon[81715]: pgmap v4070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:41.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:42 compute-1 podman[258700]: 2026-01-22 15:42:42.066954715 +0000 UTC m=+0.048179126 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 22 15:42:42 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:43.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:43 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:43 compute-1 ceph-mon[81715]: pgmap v4071: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:43.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:44 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:45.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:45 compute-1 ceph-mon[81715]: pgmap v4072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:45 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:45 compute-1 ceph-mon[81715]: Health check update: 158 slow ops, oldest one blocked for 7552 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:45.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:46 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:46 compute-1 ceph-mon[81715]: pgmap v4073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:42:47.543 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:42:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:42:47.543 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:42:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:42:47.543 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:42:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:47.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:47 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:49 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:49 compute-1 ceph-mon[81715]: pgmap v4074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:49.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:49.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:50 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:50 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:50 compute-1 ceph-mon[81715]: Health check update: 158 slow ops, oldest one blocked for 7557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:51 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:51 compute-1 ceph-mon[81715]: pgmap v4075: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:51.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:51.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:52 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:53 compute-1 ceph-mon[81715]: pgmap v4076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:53 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:53.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:53.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:53 compute-1 sudo[258719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:42:53 compute-1 sudo[258719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:42:53 compute-1 sudo[258719]: pam_unix(sudo:session): session closed for user root
Jan 22 15:42:54 compute-1 sudo[258744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:42:54 compute-1 sudo[258744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:42:54 compute-1 sudo[258744]: pam_unix(sudo:session): session closed for user root
Jan 22 15:42:54 compute-1 sudo[258769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:42:54 compute-1 sudo[258769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:42:54 compute-1 sudo[258769]: pam_unix(sudo:session): session closed for user root
Jan 22 15:42:54 compute-1 sudo[258794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:42:54 compute-1 sudo[258794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:42:54 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:54 compute-1 sudo[258794]: pam_unix(sudo:session): session closed for user root
Jan 22 15:42:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:55.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:55 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:55 compute-1 ceph-mon[81715]: pgmap v4077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:42:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:42:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:42:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:42:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:42:55 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:42:55 compute-1 ceph-mon[81715]: Health check update: 158 slow ops, oldest one blocked for 7562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:42:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:55.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:42:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:56 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:57.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:57 compute-1 ceph-mon[81715]: pgmap v4078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:57 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:57.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:58 compute-1 podman[258850]: 2026-01-22 15:42:58.127860202 +0000 UTC m=+0.111611610 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 22 15:42:58 compute-1 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:59.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:59 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:42:59 compute-1 ceph-mon[81715]: pgmap v4079: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:42:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:59.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:00 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:00 compute-1 ceph-mon[81715]: Health check update: 158 slow ops, oldest one blocked for 7567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:01.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:01 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:01 compute-1 ceph-mon[81715]: pgmap v4080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:01.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:01 compute-1 sudo[258877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:43:01 compute-1 sudo[258877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:43:01 compute-1 sudo[258877]: pam_unix(sudo:session): session closed for user root
Jan 22 15:43:01 compute-1 sudo[258902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:43:01 compute-1 sudo[258902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:43:01 compute-1 sudo[258902]: pam_unix(sudo:session): session closed for user root
Jan 22 15:43:02 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:43:02 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:43:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:03.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:03 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:03 compute-1 ceph-mon[81715]: pgmap v4081: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:03.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:04 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:05.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:05 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:05 compute-1 ceph-mon[81715]: pgmap v4082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:05 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7572 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:05.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:06 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:07.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:07 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:07 compute-1 ceph-mon[81715]: pgmap v4083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:07.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:08 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:09.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:09 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:09 compute-1 ceph-mon[81715]: pgmap v4084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:09.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:10 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:10 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:10 compute-1 ceph-mon[81715]: pgmap v4085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:11.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:11.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:11 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:13 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:13 compute-1 ceph-mon[81715]: pgmap v4086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:13 compute-1 podman[258927]: 2026-01-22 15:43:13.053587293 +0000 UTC m=+0.044986178 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:43:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:13.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:13.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:14 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:15.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:15 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:15 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:15 compute-1 ceph-mon[81715]: pgmap v4087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:15 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:15.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:17.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:17 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:17 compute-1 ceph-mon[81715]: pgmap v4088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:17.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:18 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:18 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3033953739' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:43:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/3033953739' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:43:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:19.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:19 compute-1 ceph-mon[81715]: pgmap v4089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:19 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:19.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:20 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:20 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:20 compute-1 ceph-mon[81715]: pgmap v4090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:21.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:21 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:21.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:22 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:22 compute-1 ceph-mon[81715]: pgmap v4091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #253. Immutable memtables: 0.
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.647773) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 163] Flushing memtable with next log file: 253
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602647803, "job": 163, "event": "flush_started", "num_memtables": 1, "num_entries": 2748, "num_deletes": 540, "total_data_size": 5066096, "memory_usage": 5144136, "flush_reason": "Manual Compaction"}
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 163] Level-0 flush table #254: started
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602669790, "cf_name": "default", "job": 163, "event": "table_file_creation", "file_number": 254, "file_size": 3292028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 124756, "largest_seqno": 127499, "table_properties": {"data_size": 3281741, "index_size": 5692, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 32521, "raw_average_key_size": 23, "raw_value_size": 3257045, "raw_average_value_size": 2344, "num_data_blocks": 239, "num_entries": 1389, "num_filter_entries": 1389, "num_deletions": 540, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096432, "oldest_key_time": 1769096432, "file_creation_time": 1769096602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 254, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 163] Flush lasted 22059 microseconds, and 10168 cpu microseconds.
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.669829) [db/flush_job.cc:967] [default] [JOB 163] Level-0 flush table #254: 3292028 bytes OK
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.669848) [db/memtable_list.cc:519] [default] Level-0 commit table #254 started
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.672489) [db/memtable_list.cc:722] [default] Level-0 commit table #254: memtable #1 done
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.672506) EVENT_LOG_v1 {"time_micros": 1769096602672500, "job": 163, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.672524) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 163] Try to delete WAL files size 5052548, prev total WAL file size 5052548, number of live WAL files 2.
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000250.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.673996) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end)
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 164] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 163 Base level 0, inputs: [254(3214KB)], [252(9976KB)]
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602674034, "job": 164, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [254], "files_L6": [252], "score": -1, "input_data_size": 13507766, "oldest_snapshot_seqno": -1}
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 164] Generated table #255: 14621 keys, 11671635 bytes, temperature: kUnknown
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602753491, "cf_name": "default", "job": 164, "event": "table_file_creation", "file_number": 255, "file_size": 11671635, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11592585, "index_size": 41369, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36613, "raw_key_size": 400577, "raw_average_key_size": 27, "raw_value_size": 11345137, "raw_average_value_size": 775, "num_data_blocks": 1495, "num_entries": 14621, "num_filter_entries": 14621, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 255, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.753851) [db/compaction/compaction_job.cc:1663] [default] [JOB 164] Compacted 1@0 + 1@6 files to L6 => 11671635 bytes
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.755109) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.8 rd, 146.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 9.7 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 15718, records dropped: 1097 output_compression: NoCompression
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.755130) EVENT_LOG_v1 {"time_micros": 1769096602755121, "job": 164, "event": "compaction_finished", "compaction_time_micros": 79554, "compaction_time_cpu_micros": 45668, "output_level": 6, "num_output_files": 1, "total_output_size": 11671635, "num_input_records": 15718, "num_output_records": 14621, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000254.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602756085, "job": 164, "event": "table_file_deletion", "file_number": 254}
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000252.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602758530, "job": 164, "event": "table_file_deletion", "file_number": 252}
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.673888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.758598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.758603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.758605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.758607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:22 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.758609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:23.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:23 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:23.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:24 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:24 compute-1 ceph-mon[81715]: pgmap v4092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:25.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:25.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:25 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:25 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:26 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:26 compute-1 ceph-mon[81715]: pgmap v4093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:27.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:27.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:27 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:29 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:29 compute-1 ceph-mon[81715]: pgmap v4094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:29 compute-1 podman[258946]: 2026-01-22 15:43:29.151995195 +0000 UTC m=+0.127481610 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 15:43:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:29.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:29.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:30 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:30 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:30 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:31 compute-1 ceph-mon[81715]: pgmap v4095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:31 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:31.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:32 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:33 compute-1 ceph-mon[81715]: pgmap v4096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:33 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:33.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:33.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:34 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:35 compute-1 ceph-mon[81715]: pgmap v4097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:35 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:35 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:35.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:35.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:36 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:37.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:37 compute-1 ceph-mon[81715]: pgmap v4098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:37 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:37.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:38 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:39.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:39 compute-1 ceph-mon[81715]: pgmap v4099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:39 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:39.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #256. Immutable memtables: 0.
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.131950) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 165] Flushing memtable with next log file: 256
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620132039, "job": 165, "event": "flush_started", "num_memtables": 1, "num_entries": 501, "num_deletes": 287, "total_data_size": 454547, "memory_usage": 464376, "flush_reason": "Manual Compaction"}
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 165] Level-0 flush table #257: started
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620136556, "cf_name": "default", "job": 165, "event": "table_file_creation", "file_number": 257, "file_size": 297704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 127504, "largest_seqno": 128000, "table_properties": {"data_size": 295140, "index_size": 535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7222, "raw_average_key_size": 19, "raw_value_size": 289573, "raw_average_value_size": 774, "num_data_blocks": 23, "num_entries": 374, "num_filter_entries": 374, "num_deletions": 287, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096603, "oldest_key_time": 1769096603, "file_creation_time": 1769096620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 257, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 165] Flush lasted 4644 microseconds, and 2546 cpu microseconds.
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.136607) [db/flush_job.cc:967] [default] [JOB 165] Level-0 flush table #257: 297704 bytes OK
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.136627) [db/memtable_list.cc:519] [default] Level-0 commit table #257 started
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138360) [db/memtable_list.cc:722] [default] Level-0 commit table #257: memtable #1 done
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138374) EVENT_LOG_v1 {"time_micros": 1769096620138370, "job": 165, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138393) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 165] Try to delete WAL files size 451377, prev total WAL file size 451377, number of live WAL files 2.
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000253.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138934) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0036303432' seq:72057594037927935, type:22 .. '6C6F676D0036323937' seq:0, type:0; will stop at (end)
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 166] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 165 Base level 0, inputs: [257(290KB)], [255(11MB)]
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620138969, "job": 166, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [257], "files_L6": [255], "score": -1, "input_data_size": 11969339, "oldest_snapshot_seqno": -1}
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 166] Generated table #258: 14412 keys, 11805136 bytes, temperature: kUnknown
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620208137, "cf_name": "default", "job": 166, "event": "table_file_creation", "file_number": 258, "file_size": 11805136, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11726941, "index_size": 41090, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36037, "raw_key_size": 397078, "raw_average_key_size": 27, "raw_value_size": 11482650, "raw_average_value_size": 796, "num_data_blocks": 1479, "num_entries": 14412, "num_filter_entries": 14412, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 258, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.208348) [db/compaction/compaction_job.cc:1663] [default] [JOB 166] Compacted 1@0 + 1@6 files to L6 => 11805136 bytes
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.209594) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.9 rd, 170.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(79.9) write-amplify(39.7) OK, records in: 14995, records dropped: 583 output_compression: NoCompression
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.209608) EVENT_LOG_v1 {"time_micros": 1769096620209602, "job": 166, "event": "compaction_finished", "compaction_time_micros": 69233, "compaction_time_cpu_micros": 28963, "output_level": 6, "num_output_files": 1, "total_output_size": 11805136, "num_input_records": 14995, "num_output_records": 14412, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000257.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620209766, "job": 166, "event": "table_file_deletion", "file_number": 257}
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000255.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620211937, "job": 166, "event": "table_file_deletion", "file_number": 255}
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.211963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.211967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.211968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.211970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:40 compute-1 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.211971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:40 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:40 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:40 compute-1 ceph-mon[81715]: pgmap v4100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:41.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:41.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:42 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:43 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:43 compute-1 ceph-mon[81715]: pgmap v4101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:43.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:43.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:44 compute-1 podman[258973]: 2026-01-22 15:43:44.057396826 +0000 UTC m=+0.047647170 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 15:43:44 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:44 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:45.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:45 compute-1 ceph-mon[81715]: pgmap v4102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:45 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:45 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:45.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:46 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:47.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:43:47.543 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:43:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:43:47.544 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:43:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:43:47.544 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:43:47 compute-1 ceph-mon[81715]: pgmap v4103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:47 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:47.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:49 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:49.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:49.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:50 compute-1 ceph-mon[81715]: pgmap v4104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:50 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:51 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:51 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7617 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:51 compute-1 ceph-mon[81715]: pgmap v4105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:51.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:51.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:52 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:53.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:53 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:53 compute-1 ceph-mon[81715]: pgmap v4106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:53 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:53.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:54 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:55.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:55 compute-1 ceph-mon[81715]: pgmap v4107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:55 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:55 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:55.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:56 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:57.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:57.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:57 compute-1 ceph-mon[81715]: pgmap v4108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:57 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:58 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:58 compute-1 ceph-mon[81715]: pgmap v4109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:59.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:43:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:59 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:00 compute-1 podman[258992]: 2026-01-22 15:44:00.147465982 +0000 UTC m=+0.131294282 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 15:44:00 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:00 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:00 compute-1 ceph-mon[81715]: pgmap v4110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:01.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:01.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:01 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:02 compute-1 sudo[259018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:44:02 compute-1 sudo[259018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:02 compute-1 sudo[259018]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:02 compute-1 sudo[259043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:44:02 compute-1 sudo[259043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:02 compute-1 sudo[259043]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:02 compute-1 sudo[259068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:44:02 compute-1 sudo[259068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:02 compute-1 sudo[259068]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:02 compute-1 sudo[259093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:44:02 compute-1 sudo[259093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:02 compute-1 sudo[259093]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:02 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:02 compute-1 ceph-mon[81715]: pgmap v4111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:03.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:03.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:03 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:44:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:44:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:44:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:44:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:44:03 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:44:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:05.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:05 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:05 compute-1 ceph-mon[81715]: pgmap v4112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:05.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:06 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:06 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:06 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:07.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:07 compute-1 ceph-mon[81715]: pgmap v4113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:07 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:07.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:08 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:09 compute-1 sudo[259148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:44:09 compute-1 sudo[259148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:09 compute-1 sudo[259148]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:09.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:09 compute-1 sudo[259173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:44:09 compute-1 sudo[259173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:09 compute-1 sudo[259173]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:09 compute-1 ceph-mon[81715]: pgmap v4114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:09 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:09 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:44:09 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:44:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:09.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:10 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:10 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:11.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:11 compute-1 ceph-mon[81715]: pgmap v4115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:11 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:11.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:12 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:13.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:13 compute-1 ceph-mon[81715]: pgmap v4116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:13 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:13.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:14 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:14 compute-1 ceph-mon[81715]: pgmap v4117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:15 compute-1 podman[259199]: 2026-01-22 15:44:15.084864498 +0000 UTC m=+0.060972591 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 15:44:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:15.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:15.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:16 compute-1 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:16 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7642 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:16 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:16 compute-1 ceph-mon[81715]: pgmap v4118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:17.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:17.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:17 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:18 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:18 compute-1 ceph-mon[81715]: pgmap v4119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1330623188' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:44:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1330623188' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:44:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:19.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:19.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:20 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:21 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:21 compute-1 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:21 compute-1 ceph-mon[81715]: pgmap v4120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:21.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:21.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:22 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:23.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:23.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:24 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:24 compute-1 ceph-mon[81715]: pgmap v4121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:24 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:25 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:25 compute-1 ceph-mon[81715]: pgmap v4122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:25.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:25.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:26 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:26 compute-1 ceph-mon[81715]: Health check update: 127 slow ops, oldest one blocked for 7653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:27 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:27 compute-1 ceph-mon[81715]: pgmap v4123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:27.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:27.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:28 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:29 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:29 compute-1 ceph-mon[81715]: pgmap v4124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:29.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:29.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:30 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:30 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:30 compute-1 ceph-mon[81715]: Health check update: 127 slow ops, oldest one blocked for 7658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:31 compute-1 podman[259219]: 2026-01-22 15:44:31.134520071 +0000 UTC m=+0.117725256 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:44:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:31.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:31.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:32 compute-1 ceph-mon[81715]: pgmap v4125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:32 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:33 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:33 compute-1 ceph-mon[81715]: pgmap v4126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:33.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:33.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:34 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:44:34 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.5 total, 600.0 interval
                                           Cumulative writes: 17K writes, 51K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.00 MB/s
                                           Cumulative WAL: 17K writes, 6346 syncs, 2.79 writes per sync, written: 0.04 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 709 writes, 1322 keys, 709 commit groups, 1.0 writes per commit group, ingest: 0.52 MB, 0.00 MB/s
                                           Interval WAL: 709 writes, 322 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 15:44:35 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:35 compute-1 ceph-mon[81715]: pgmap v4127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:35.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:35.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:36 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:36 compute-1 ceph-mon[81715]: Health check update: 127 slow ops, oldest one blocked for 7662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:37 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:37 compute-1 ceph-mon[81715]: pgmap v4128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:37.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:37.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:38 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:38 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:39 compute-1 ceph-mon[81715]: pgmap v4129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:39 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:39.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:39.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:40 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:40 compute-1 ceph-mon[81715]: Health check update: 127 slow ops, oldest one blocked for 7667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:41 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:41 compute-1 ceph-mon[81715]: pgmap v4130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:41.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:41.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:42 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:43.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:43 compute-1 ceph-mon[81715]: pgmap v4131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:43 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:43.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:44 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:44 compute-1 ceph-mon[81715]: pgmap v4132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:45.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:45.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:45 compute-1 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:45 compute-1 ceph-mon[81715]: Health check update: 127 slow ops, oldest one blocked for 7672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:46 compute-1 podman[259244]: 2026-01-22 15:44:46.092558215 +0000 UTC m=+0.085051831 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:44:47 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:47 compute-1 ceph-mon[81715]: pgmap v4133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:47.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:44:47.545 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:44:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:44:47.546 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:44:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:44:47.546 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:44:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:47.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:48 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:49.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:49 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:49 compute-1 ceph-mon[81715]: pgmap v4134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:49 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:49.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:51 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:51 compute-1 ceph-mon[81715]: Health check update: 127 slow ops, oldest one blocked for 7677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:51.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:51.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:51 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:51 compute-1 ceph-mon[81715]: pgmap v4135: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:52 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:52 compute-1 ceph-mon[81715]: pgmap v4136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:53.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:53.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:53 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:54 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:54 compute-1 ceph-mon[81715]: pgmap v4137: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:55.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:55.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:55 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:55 compute-1 ceph-mon[81715]: Health check update: 211 slow ops, oldest one blocked for 7682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:56 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:56 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:56 compute-1 ceph-mon[81715]: pgmap v4138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:57.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:57 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:57 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:57 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:57.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:57 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:58 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:58 compute-1 ceph-mon[81715]: pgmap v4139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:59.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:59 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:44:59 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:59 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:59.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:59 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:01 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:01 compute-1 ceph-mon[81715]: Health check update: 211 slow ops, oldest one blocked for 7687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:01 compute-1 ceph-mon[81715]: pgmap v4140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:01 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:01.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:01 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:01 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:01 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:01.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:02 compute-1 podman[259265]: 2026-01-22 15:45:02.116453174 +0000 UTC m=+0.110452178 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 15:45:02 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:03 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:03 compute-1 ceph-mon[81715]: pgmap v4141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:03 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:03.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:03 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:03 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:03 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:03.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:04 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:05 compute-1 ceph-mon[81715]: pgmap v4142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:05 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:05 compute-1 ceph-mon[81715]: Health check update: 211 slow ops, oldest one blocked for 7692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:05.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:05 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:05 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:05 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:05.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:06 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:06 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:07.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:07 compute-1 ceph-mon[81715]: pgmap v4143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:07 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:07 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:07 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:07 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:07.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:08 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:09.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:09 compute-1 ceph-mon[81715]: pgmap v4144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:09 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:09 compute-1 sudo[259290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:45:09 compute-1 sudo[259290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:09 compute-1 sudo[259290]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:09 compute-1 sudo[259315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:45:09 compute-1 sudo[259315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:09 compute-1 sudo[259315]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:09 compute-1 sudo[259340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:45:09 compute-1 sudo[259340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:09 compute-1 sudo[259340]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:09 compute-1 sudo[259365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:45:09 compute-1 sudo[259365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:09 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:09 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:09 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:09.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:10 compute-1 sudo[259365]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:10 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:10 compute-1 ceph-mon[81715]: Health check update: 211 slow ops, oldest one blocked for 7697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 15:45:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:45:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:45:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:45:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:45:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:45:10 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:45:11 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:11.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:11 compute-1 ceph-mon[81715]: pgmap v4145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:11 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:11 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:11 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:11 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:11.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:12 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:13 compute-1 ceph-mon[81715]: pgmap v4146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:13 compute-1 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:13 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:13 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:13 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:13.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:14 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:15 compute-1 ceph-mon[81715]: pgmap v4147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:15 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:15 compute-1 ceph-mon[81715]: Health check update: 211 slow ops, oldest one blocked for 7702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:15 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:15 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:15 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:15.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:16 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:16 compute-1 sudo[259422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:45:16 compute-1 sudo[259422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:16 compute-1 sudo[259422]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:16 compute-1 podman[259446]: 2026-01-22 15:45:16.505964691 +0000 UTC m=+0.053227071 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 15:45:16 compute-1 sudo[259453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:45:16 compute-1 sudo[259453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:16 compute-1 sudo[259453]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:16 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:45:16 compute-1 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:45:16 compute-1 ceph-mon[81715]: pgmap v4148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:17.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:17 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:17 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:17 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:17.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:17 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:18 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:18 compute-1 ceph-mon[81715]: pgmap v4149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1023279560' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:45:18 compute-1 ceph-mon[81715]: from='client.? 192.168.122.10:0/1023279560' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:45:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:19.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:19 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:19 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:19 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:19.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:20 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:21 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:21.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:21 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:21 compute-1 ceph-mon[81715]: Health check update: 177 slow ops, oldest one blocked for 7708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:21 compute-1 ceph-mon[81715]: pgmap v4150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:21 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:21 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:21 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:21 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:21.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:22 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:23.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:23 compute-1 ceph-mon[81715]: pgmap v4151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:23 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:23 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:23 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:45:23 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:23.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:45:24 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:25.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:25 compute-1 ceph-mon[81715]: pgmap v4152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:25 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:25 compute-1 ceph-mon[81715]: Health check update: 177 slow ops, oldest one blocked for 7712 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:25 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:25 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:45:25 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:25.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:45:26 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:26 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:26 compute-1 ceph-mon[81715]: pgmap v4153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:27.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:27 compute-1 sshd-session[259491]: Accepted publickey for zuul from 192.168.122.10 port 36066 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 15:45:27 compute-1 systemd-logind[787]: New session 51 of user zuul.
Jan 22 15:45:27 compute-1 systemd[1]: Started Session 51 of User zuul.
Jan 22 15:45:27 compute-1 sshd-session[259491]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 15:45:27 compute-1 sudo[259495]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 22 15:45:27 compute-1 sudo[259495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 15:45:27 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:27 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:27 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:27.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:28 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:28 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:29.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:29 compute-1 ceph-mon[81715]: pgmap v4154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:29 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:29 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:29 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:29 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:29.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:30 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:30 compute-1 ceph-mon[81715]: Health check update: 177 slow ops, oldest one blocked for 7718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:30 compute-1 ceph-mon[81715]: from='client.27434 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:30 compute-1 ceph-mon[81715]: pgmap v4155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 22 15:45:31 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3667014604' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 15:45:31 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:31.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:31 compute-1 ceph-mon[81715]: from='client.18522 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:31 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:31 compute-1 ceph-mon[81715]: from='client.27443 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3667014604' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 15:45:31 compute-1 ceph-mon[81715]: from='client.18528 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:31 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3055205003' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 15:45:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:45:31 compute-1 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Cumulative writes: 23K writes, 129K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 23K writes, 23K syncs, 1.00 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1823 writes, 10K keys, 1823 commit groups, 1.0 writes per commit group, ingest: 16.88 MB, 0.03 MB/s
                                           Interval WAL: 1823 writes, 1823 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     79.6      1.71              0.46        83    0.021       0      0       0.0       0.0
                                             L6      1/0   11.26 MB   0.0      0.9     0.1      0.8       0.8      0.0       0.0   6.0    138.4    120.1      6.83              2.58        82    0.083    926K    51K       0.0       0.0
                                            Sum      1/0   11.26 MB   0.0      0.9     0.1      0.8       0.9      0.1       0.0   7.0    110.6    111.9      8.54              3.04       165    0.052    926K    51K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4    136.9    137.7      0.57              0.27        12    0.047     91K   4912       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.9     0.1      0.8       0.8      0.0       0.0   0.0    138.4    120.1      6.83              2.58        82    0.083    926K    51K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     79.7      1.71              0.46        82    0.021       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.133, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.93 GB write, 0.12 MB/s write, 0.92 GB read, 0.12 MB/s read, 8.5 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 96.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000592 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(5008,90.63 MB,29.8109%) FilterBlock(165,2.52 MB,0.828045%) IndexBlock(165,3.05 MB,1.00344%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 15:45:31 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:31 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:31 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:31.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:33 compute-1 podman[259748]: 2026-01-22 15:45:33.192408509 +0000 UTC m=+0.119483704 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 15:45:33 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:33 compute-1 ceph-mon[81715]: from='client.28636 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:33 compute-1 ceph-mon[81715]: pgmap v4156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:33.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:33 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:33 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:33 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:33.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:34 compute-1 ovs-vsctl[259803]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 22 15:45:34 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:34 compute-1 ceph-mon[81715]: from='client.28642 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:34 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/238792465' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 15:45:34 compute-1 virtqemud[220928]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 22 15:45:34 compute-1 virtqemud[220928]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 22 15:45:34 compute-1 virtqemud[220928]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 22 15:45:35 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:35 compute-1 ceph-mon[81715]: pgmap v4157: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:35 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:35 compute-1 ceph-mon[81715]: Health check update: 177 slow ops, oldest one blocked for 7723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:35.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:35 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: cache status {prefix=cache status} (starting...)
Jan 22 15:45:35 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 15:45:35 compute-1 lvm[260120]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 15:45:35 compute-1 lvm[260120]: VG ceph_vg0 finished
Jan 22 15:45:35 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: client ls {prefix=client ls} (starting...)
Jan 22 15:45:35 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 15:45:35 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:35 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:35 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:35.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:36 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: damage ls {prefix=damage ls} (starting...)
Jan 22 15:45:36 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 15:45:36 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:36 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: dump loads {prefix=dump loads} (starting...)
Jan 22 15:45:36 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 15:45:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 22 15:45:36 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/624135693' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 15:45:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:36 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 22 15:45:36 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 15:45:36 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 22 15:45:36 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 15:45:36 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 22 15:45:36 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 15:45:36 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 15:45:36 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3217916333' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:45:36 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 22 15:45:36 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 15:45:37 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 22 15:45:37 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 15:45:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 22 15:45:37 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4183734222' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 15:45:37 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 22 15:45:37 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 15:45:37 compute-1 ceph-mon[81715]: from='client.27464 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:37 compute-1 ceph-mon[81715]: from='client.27476 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:37 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/624135693' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 15:45:37 compute-1 ceph-mon[81715]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 15:45:37 compute-1 ceph-mon[81715]: pgmap v4158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:37 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:37 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3217916333' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:45:37 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/4183734222' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 15:45:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:37.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:37 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: ops {prefix=ops} (starting...)
Jan 22 15:45:37 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 15:45:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 22 15:45:37 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3749877864' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 15:45:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 22 15:45:37 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1858915271' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 15:45:37 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:37 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:37 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:37.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:37 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 22 15:45:37 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1030413829' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:38 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: session ls {prefix=session ls} (starting...)
Jan 22 15:45:38 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 15:45:38 compute-1 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: status {prefix=status} (starting...)
Jan 22 15:45:38 compute-1 ceph-mon[81715]: from='client.18543 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:38 compute-1 ceph-mon[81715]: from='client.27494 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:38 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3475687693' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 15:45:38 compute-1 ceph-mon[81715]: from='client.18555 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:38 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3749877864' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 15:45:38 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1858915271' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 15:45:38 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:38 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3333500352' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:45:38 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1030413829' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:38 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1934319290' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 15:45:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 22 15:45:38 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1857267379' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 22 15:45:38 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2297523029' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 15:45:38 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 22 15:45:38 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/606502140' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 22 15:45:39 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3170290335' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 15:45:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 15:45:39 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/158912215' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:39.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:39 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 22 15:45:39 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1158656698' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 15:45:39 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:39 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:39 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:39.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:40 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 22 15:45:40 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2912699497' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 15:45:40 compute-1 ceph-mon[81715]: from='client.27527 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:40 compute-1 ceph-mon[81715]: from='client.18579 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:40 compute-1 ceph-mon[81715]: from='client.27533 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1857267379' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:40 compute-1 ceph-mon[81715]: pgmap v4159: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1234990876' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 15:45:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/347329438' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 15:45:40 compute-1 ceph-mon[81715]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 15:45:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/2297523029' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 15:45:40 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/606502140' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/129939776' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3170290335' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 15:45:40 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/158912215' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:40 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 22 15:45:40 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/226829701' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:40 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 22 15:45:40 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1073452591' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 22 15:45:41 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2501336058' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.28666 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.18606 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.28672 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.18618 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/4162337217' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.27569 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1538690967' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1158656698' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2747367764' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1040615603' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/2912699497' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: Health check update: 177 slow ops, oldest one blocked for 7728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1628751414' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3603627911' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: pgmap v4160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.18657 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3339043403' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/226829701' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/4136528779' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1073452591' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2675750719' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2938522680' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3865448621' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/2501336058' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1892410733' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:41.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:01.038111+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 39813120 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:02.038324+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 39813120 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:03.038554+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 39813120 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:04.038912+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 39804928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:05.039133+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 39804928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:06.039269+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 39804928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:07.039470+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 39804928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:08.039730+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 39804928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:09.039928+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:10.040346+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:11.040536+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:12.040722+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:13.040995+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:14.041124+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:15.041246+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:16.041390+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:17.041584+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:18.041796+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:19.041948+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:20.042206+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:21.042432+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:22.042618+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 39780352 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:23.042813+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 39780352 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:24.043046+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 39780352 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:25.043196+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 39780352 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:26.043416+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 39780352 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:27.043617+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 39780352 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:28.043783+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 39780352 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:29.043916+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:30.044149+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:31.044289+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:32.044540+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:33.044788+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:34.044967+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:35.045208+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:36.045455+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:37.045718+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:38.045954+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:39.046113+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:40.046353+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:41.046574+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:42.046765+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:43.046921+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:44.047067+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:45.047199+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:46.047355+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:47.047542+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:48.047726+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:49.047915+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:50.048178+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:51.048328+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:52.048530+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:53.048907+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:54.049137+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:55.049354+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:56.049541+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:57.049686+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:58.049880+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:12:59.050075+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:00.050264+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:01.050413+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:02.050547+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:03.050741+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:04.050872+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:05.051077+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:06.051261+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:07.051408+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:08.051635+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:09.051885+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:10.052112+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:11.052340+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:12.052514+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:13.052706+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:14.052991+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:15.053281+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:16.053572+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:17.053717+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:18.053865+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:19.054022+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:20.054252+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:21.054447+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f3d2d000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 117.671112061s of 117.708808899s, submitted: 13
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:22.054584+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e25000/0x0/0x1bfc00000, data 0x7661c43/0x6c39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:23.054716+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2258056 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f3d2d000 session 0x55b6f4db41e0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:24.054868+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:25.055062+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e25000/0x0/0x1bfc00000, data 0x7661c43/0x6c39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:26.055233+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e25000/0x0/0x1bfc00000, data 0x7661c43/0x6c39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:27.055390+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:28.055553+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2258056 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:29.055736+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:30.055958+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5156c00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5156c00 session 0x55b6f2954780
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5157800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:31.056115+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5157800 session 0x55b6f464d860
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:32.056258+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:33.056400+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2253662 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:34.056581+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:35.056762+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:36.056913+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:37.057138+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f3d2d000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f3d2d000 session 0x55b6f4d59a40
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f4680800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 39706624 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f4680800 session 0x55b6f4fca3c0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:38.057323+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2253342 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 39706624 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:39.057469+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 39706624 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:40.057711+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f4d62000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.354337692s of 18.391880035s, submitted: 10
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f4d62000 session 0x55b6f2bdb680
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133480448 unmapped: 39698432 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:41.057845+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133480448 unmapped: 39698432 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:42.058012+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133480448 unmapped: 39698432 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:43.058179+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2255170 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5156c00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133480448 unmapped: 39698432 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5156c00 session 0x55b6f41cef00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5156000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:44.058317+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5156000 session 0x55b6f227da40
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133480448 unmapped: 39698432 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:45.058459+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5156000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 39288832 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:46.058605+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4d89000/0x0/0x1bfc00000, data 0x76fcc6c/0x6cd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 39288832 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:47.058712+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b42bf000/0x0/0x1bfc00000, data 0x81c6c6c/0x779f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133922816 unmapped: 39256064 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:48.058818+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2345981 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 39247872 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:49.058955+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 39247872 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:50.059086+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 39247872 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b42bf000/0x0/0x1bfc00000, data 0x81c6c6c/0x779f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:51.059199+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 39247872 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:52.059335+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 39247872 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:53.059482+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2345981 data_alloc: 218103808 data_used: 18550784
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 39239680 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f3d2d000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.394145966s of 13.541505814s, submitted: 38
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:54.059608+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f3d2d000 session 0x55b6f4e00780
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133980160 unmapped: 39198720 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:55.059763+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133980160 unmapped: 39198720 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:56.059971+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133980160 unmapped: 39198720 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:57.060129+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133980160 unmapped: 39198720 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:58.060294+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133980160 unmapped: 39198720 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:13:59.060434+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133980160 unmapped: 39198720 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:00.060695+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133980160 unmapped: 39198720 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:01.060865+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:02.061080+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:03.061216+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:04.061332+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.5 total, 600.0 interval
                                           Cumulative writes: 15K writes, 47K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 15K writes, 5211 syncs, 2.94 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 872 writes, 1903 keys, 872 commit groups, 1.0 writes per commit group, ingest: 0.90 MB, 0.00 MB/s
                                           Interval WAL: 872 writes, 408 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:05.061563+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:06.061792+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:07.061976+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:08.062147+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:09.062352+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:10.062809+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:11.062999+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:12.063267+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:13.063478+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:14.063692+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:15.063861+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:16.064075+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:17.064261+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:18.064491+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:19.064633+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:20.064840+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:21.065045+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:22.065192+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:23.065382+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:24.065516+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:25.065712+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:26.065861+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:27.066033+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:28.066193+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:29.066373+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:30.066612+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:31.066871+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:32.067102+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:33.067300+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:34.067463+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:35.067642+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:36.067836+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:37.068008+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:38.068145+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:39.068293+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:40.068483+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:41.068620+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:42.068776+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:43.068956+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f73da400 session 0x55b6f4dbf4a0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f4680800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:44.069124+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:45.069259+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:46.069398+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:47.069574+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:48.069784+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:49.070232+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:50.070409+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:51.070760+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:52.070963+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:53.071153+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:54.071312+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:55.071644+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:56.071862+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:57.071999+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:58.072185+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:14:59.072382+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:00.072561+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:01.072745+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:02.072926+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:03.073184+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:04.073309+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:05.073497+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:06.073706+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:07.073874+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:08.074088+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:09.074287+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:10.074491+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:11.074727+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:12.074843+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:13.075042+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:14.075232+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:15.075378+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:16.075521+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:17.075716+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:18.075836+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:19.075979+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:20.076116+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:21.076254+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:22.076403+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:23.076552+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:24.076730+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:25.076864+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:26.076999+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:27.077152+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:28.077303+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:29.077447+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:30.077635+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5b5bc00 session 0x55b6f4fca780
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f73da400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:31.077782+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:32.077924+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2006c00 session 0x55b6f4dda960
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5b5bc00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:33.078091+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:34.078235+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:35.078423+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:36.078576+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:37.078742+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:38.078908+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:39.079125+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:40.079361+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:41.079504+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:42.079737+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:43.079928+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:44.080100+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:45.080277+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:46.080446+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:47.080614+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f722f000 session 0x55b6f2ff63c0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f4d62000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:48.080789+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:49.080997+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:50.081169+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:51.081310+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:52.081459+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:53.081715+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:54.081866+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:55.082028+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:56.082168+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:57.082410+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:58.082595+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:15:59.082851+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:00.083102+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:01.083696+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5c2bc00 session 0x55b6f1996d20
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5156c00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:02.083862+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:03.084039+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:04.084240+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:05.084428+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:06.084624+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:07.084787+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:08.084924+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:09.085067+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:10.085235+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:11.085429+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:12.085580+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:13.085778+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:14.085916+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:15.086088+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:16.086300+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:17.086482+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:18.086752+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:19.086886+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:20.087042+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:21.087173+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:22.087346+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f40df000 session 0x55b6f4b1d4a0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f459f800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:23.087477+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:24.087594+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:25.087721+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f459e400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:26.087841+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:27.087928+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f459e400 session 0x55b6f4de3860
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f459ec00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f459ec00 session 0x55b6f2a743c0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:28.088123+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133619712 unmapped: 39559168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:29.088257+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298361 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133619712 unmapped: 39559168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f514c400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 155.705093384s of 155.759750366s, submitted: 19
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f514c400 session 0x55b6f227dc20
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e4000/0x0/0x1bfc00000, data 0x7ba0c7c/0x717a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f514d400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:30.088383+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 39510016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f514d400 session 0x55b6f29543c0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:31.088498+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 39510016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:32.088641+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c1a/0x7179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 39501824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:33.088832+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133685248 unmapped: 39493632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:34.089034+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299837 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f3d2d000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133685248 unmapped: 39493632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f3d2d000 session 0x55b6f4db5a40
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f459e400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:35.089163+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133783552 unmapped: 39395328 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:36.089261+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134881280 unmapped: 38297600 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:37.089357+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134914048 unmapped: 38264832 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:38.089484+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134979584 unmapped: 38199296 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:39.089611+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135020544 unmapped: 38158336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.372165680s of 10.111143112s, submitted: 313
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:40.089793+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135036928 unmapped: 38141952 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:41.089885+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135053312 unmapped: 38125568 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:42.090035+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135069696 unmapped: 38109184 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:43.090202+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:44.090368+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:45.090514+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:46.090740+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:47.090910+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:48.091110+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:49.091262+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:50.091451+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:51.091607+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:52.091758+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:53.091901+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:54.092028+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:55.092194+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:56.092327+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:57.092474+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:58.092640+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:16:59.092866+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:00.093101+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:01.093300+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:02.093439+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135094272 unmapped: 38084608 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:03.093650+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135094272 unmapped: 38084608 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:04.093865+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135094272 unmapped: 38084608 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:05.094058+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135094272 unmapped: 38084608 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:06.094262+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135094272 unmapped: 38084608 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:07.094410+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135102464 unmapped: 38076416 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:08.094595+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135102464 unmapped: 38076416 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:09.094877+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135110656 unmapped: 38068224 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:10.095060+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135110656 unmapped: 38068224 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:11.095355+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135110656 unmapped: 38068224 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:12.095556+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135110656 unmapped: 38068224 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:13.095801+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:14.095991+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:15.096274+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f722f400 session 0x55b6f4dbe1e0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f459ec00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:16.096435+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:17.096642+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:18.096805+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:19.097011+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:20.097240+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:21.097463+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:22.097748+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:23.097933+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:24.098069+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:25.098233+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:26.098375+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:27.098623+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:28.098823+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:29.099028+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:30.099337+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:31.099560+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2027c00 session 0x55b6f4db4b40
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f514c400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:32.099775+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:33.100022+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:34.100200+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:35.100428+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:36.100787+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:37.101010+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:38.101273+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:39.101552+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:40.101788+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:41.102817+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:42.102965+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:43.103240+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:44.103558+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:45.103809+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:46.104055+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:47.104242+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:48.104504+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:49.104714+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:50.104986+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:51.105214+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:52.105419+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:53.105784+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:54.105955+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:55.106118+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:56.106303+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:57.106530+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:58.106729+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:17:59.107356+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:00.108042+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:01.108513+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:02.109486+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:03.109891+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:04.110721+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:05.111103+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:06.112081+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:07.112596+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:08.114291+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:09.114634+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:10.115273+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:11.115440+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:12.115592+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:13.116635+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:14.117011+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:15.117221+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:16.117482+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:17.117764+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:18.117918+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:19.118212+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:20.118509+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:21.118717+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:22.118981+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:23.119112+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:24.119364+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:25.119490+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:26.119731+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:27.119857+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:28.119997+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:29.120258+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:30.120457+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:31.120583+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:32.120709+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:33.120828+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 22 15:45:41 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2764929686' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:34.120956+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:35.121127+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:36.121247+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:37.121439+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:38.121608+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:39.121768+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:40.122026+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:41.122202+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:42.122372+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:43.122534+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:44.122733+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:45.122880+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:46.123049+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:47.123184+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:48.123336+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:49.123537+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:50.123777+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:51.123959+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:52.124092+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:53.124226+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135143424 unmapped: 38035456 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:54.124431+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f4681400 session 0x55b6f29554a0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f7442c00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135143424 unmapped: 38035456 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:55.124644+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135143424 unmapped: 38035456 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:56.124847+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135143424 unmapped: 38035456 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:57.124994+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135143424 unmapped: 38035456 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:58.125153+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:18:59.125304+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:00.125611+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:01.125747+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:02.125877+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:03.126012+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:04.126385+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:05.126606+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:06.126778+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:07.127096+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:08.127387+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:09.127636+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:10.127913+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:11.128149+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:12.128313+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5b59400 session 0x55b6f227cf00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f7443000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:13.128493+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:14.128637+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f65e2400 session 0x55b6f3ed5680
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5084000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f7231000 session 0x55b6f41ced20
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f4ff0400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:15.128827+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:16.128984+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:17.129154+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:18.129327+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:19.129478+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:20.129742+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:21.129915+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:22.130063+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:23.130220+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:24.130348+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:25.130485+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:26.130609+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5089000 session 0x55b6f4f60960
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f2f87400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:27.130739+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:28.130976+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:29.131149+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:30.131417+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:31.131606+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:32.131781+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:33.131993+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:34.132145+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:35.132309+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:36.132462+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:37.132588+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:38.132704+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:39.132841+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:40.132999+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:41.133155+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:42.133330+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:43.133503+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:44.133642+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:45.133763+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:46.133933+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:47.134049+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:48.134199+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:49.134329+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:50.134492+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:51.134633+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:52.134758+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:53.134903+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:54.135022+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:55.135144+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:56.135292+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:57.135440+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:58.135556+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:19:59.135633+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:00.135885+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:01.136028+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:02.136190+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:03.136341+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:04.136480+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:05.136642+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:06.136990+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:07.137153+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:08.137285+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:09.137425+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:10.137613+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:11.137766+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:12.137934+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:13.138079+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:14.138207+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:15.138314+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:16.138466+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f4681800 session 0x55b6f2ed0000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5089000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:17.138754+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:18.138886+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:19.139053+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:20.139230+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:21.139412+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:22.139593+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:23.139733+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:24.139857+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:25.140095+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:26.140282+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f45b1c00 session 0x55b6f2eb9a40
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5eda000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:27.140413+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:28.140534+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:29.140755+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:30.140937+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:31.141171+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:32.141385+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:33.141521+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:34.141683+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:35.141832+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:36.142019+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:37.142216+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:38.142460+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:39.142605+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:40.142799+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:41.142961+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:42.143114+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:43.143260+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:44.143409+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:45.143525+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:46.143610+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:47.143709+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:48.143913+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:49.144056+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:50.144206+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:51.144336+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:52.144464+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:53.144563+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:54.144670+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:55.144826+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:56.144971+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:57.145092+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:58.145237+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:20:59.145381+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:00.145596+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:01.145736+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5b5ac00 session 0x55b6f47f5e00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f45b1c00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:02.145946+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:03.146126+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:04.146246+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:05.146386+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:06.146572+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:07.146743+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:08.146929+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f4680c00 session 0x55b6f3ef2000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5b5ac00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:09.147071+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:10.147328+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:11.147467+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:12.147630+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:13.147864+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:14.148331+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:15.148476+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:16.148627+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:17.149732+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:18.149922+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:19.150212+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:20.150788+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:21.151309+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f207b000 session 0x55b6f4d59e00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f4d63c00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:22.151945+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:23.152313+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:24.152447+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:25.152909+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:26.153106+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:27.153398+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:28.153739+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:29.154071+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:30.154248+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:31.154367+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:32.154498+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5eda400 session 0x55b6f4224780
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f2f24800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:33.154756+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:34.154880+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:35.155133+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:36.155329+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:37.155458+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:38.155625+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:39.155806+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:40.156032+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:41.156183+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:42.156336+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:43.156540+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:44.156727+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:45.156874+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:46.157023+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:47.157173+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:48.157344+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:49.157519+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:50.157729+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f2f24400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2f24400 session 0x55b6f4fcaf00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:51.157894+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f459cc00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f459cc00 session 0x55b6f29545a0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135405568 unmapped: 37773312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:52.158084+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135405568 unmapped: 37773312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f219c800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 313.078704834s of 313.278930664s, submitted: 80
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:53.158475+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f219c800 session 0x55b6f4225e00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f65e3400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f65e3400 session 0x55b6f4d58b40
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135421952 unmapped: 37756928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:54.158649+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135421952 unmapped: 37756928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:55.158823+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299834 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135421952 unmapped: 37756928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:56.158979+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f219c800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135438336 unmapped: 37740544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:57.159186+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135487488 unmapped: 37691392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:58.159318+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135528448 unmapped: 37650432 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:21:59.159476+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135544832 unmapped: 37634048 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:00.159736+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 37617664 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:01.160219+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 37617664 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:02.160478+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 37617664 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:03.160690+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 37617664 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:04.160960+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 37617664 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:05.161111+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 37617664 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:06.161287+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:07.161428+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:08.161597+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:09.161800+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:10.162003+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:11.162165+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:12.162307+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:13.162491+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:14.162726+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:15.162919+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:16.163116+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:17.163264+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:18.163404+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:19.163545+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:20.163705+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:21.163877+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:22.164081+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:23.164287+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:24.164470+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:25.164637+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:26.164748+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:27.164929+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:28.165147+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:29.165482+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:30.165767+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:31.166013+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:32.166255+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:33.166475+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:34.166738+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:35.166938+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:36.167150+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:37.167370+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:38.167541+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:39.167778+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:40.168066+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:41.168324+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:42.168495+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:43.168774+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:44.169035+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:45.169170+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:46.169361+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:47.169499+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:48.169704+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:49.169896+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:50.170117+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:51.170252+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:52.170488+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:53.170716+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:54.170873+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:55.171131+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:56.171365+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:57.171504+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:58.171702+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 37593088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:22:59.171860+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 37593088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:00.172038+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 37593088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:01.172152+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:02.172305+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 37593088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:03.172543+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 37593088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:04.172718+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 37593088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:05.172934+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:06.173105+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:07.173311+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:08.173507+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:09.173720+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:10.173894+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:11.174088+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:12.174243+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:13.174422+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:14.174561+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:15.174728+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:16.174883+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:17.175039+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:18.175264+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:19.175396+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:20.175601+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:21.175767+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:22.176006+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:23.176217+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:24.176434+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:25.176643+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:26.176842+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:27.176976+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:28.177099+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:29.177327+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:30.177595+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:31.177772+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135610368 unmapped: 37568512 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:32.177939+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135610368 unmapped: 37568512 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:33.178078+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135610368 unmapped: 37568512 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:34.178215+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135610368 unmapped: 37568512 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:35.178417+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135610368 unmapped: 37568512 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:36.178584+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135610368 unmapped: 37568512 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:37.178746+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135610368 unmapped: 37568512 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:38.178952+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f2f24400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2f24400 session 0x55b6f1996b40
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f3d2d000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135888896 unmapped: 37289984 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f3d2d000 session 0x55b6f4de34a0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:39.179145+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135888896 unmapped: 37289984 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f459cc00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 106.694984436s of 106.934410095s, submitted: 93
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:40.179316+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f459cc00 session 0x55b6f4ddb860
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135888896 unmapped: 37289984 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:41.179465+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135888896 unmapped: 37289984 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:42.179587+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135888896 unmapped: 37289984 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5c2c000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:43.179733+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5c2c000 session 0x55b6f19961e0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f4f83000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135897088 unmapped: 37281792 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:44.179996+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:45.180212+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:46.180390+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:47.180593+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:48.180797+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:49.180977+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:50.181173+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:51.181355+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:52.181579+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:53.181781+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:54.181948+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2a0c400 session 0x55b6f3ed45a0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f2f24400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:55.182058+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:56.182219+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:57.182414+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:58.182724+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:23:59.182975+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:00.183283+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:01.183488+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:02.183725+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:03.183924+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:04.184121+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.5 total, 600.0 interval
                                           Cumulative writes: 16K writes, 48K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 16K writes, 5677 syncs, 2.87 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 984 writes, 1511 keys, 984 commit groups, 1.0 writes per commit group, ingest: 0.56 MB, 0.00 MB/s
                                           Interval WAL: 984 writes, 466 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:05.184268+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:06.184451+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:07.184613+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:08.184776+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:09.184930+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:10.185144+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2026000 session 0x55b6f47f5680
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f3d2d000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:11.185286+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:12.185439+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:13.185618+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:14.185770+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:15.185942+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:16.186150+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:17.186329+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:18.186463+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:19.186710+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:20.186915+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:21.187071+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:22.187318+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:23.187481+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:24.187603+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:25.187817+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:26.188057+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:27.188279+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:28.188457+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:29.188697+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:30.188934+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:31.189201+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:32.189423+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:33.189645+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:34.189834+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:35.190049+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:36.190212+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:37.190467+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:38.190651+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:39.190838+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:40.191059+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:41.191283+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:42.191471+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:43.191712+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:44.191907+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:45.192082+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:46.192265+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:47.192438+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:48.192605+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:49.192795+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:50.192990+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:51.193141+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:52.193287+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:53.193500+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:54.193742+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:55.193978+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:56.194211+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:57.194423+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:58.194631+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:24:59.194847+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:00.195077+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:01.195315+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:02.195607+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:03.195800+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:04.195973+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:05.196177+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:06.196460+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:07.196726+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:08.196895+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:09.197098+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:10.197289+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:11.197557+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:12.197790+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:13.198130+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:14.198265+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:15.198446+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:16.198699+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:17.198929+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:18.199142+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:19.199339+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:20.199526+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:21.199824+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:22.200098+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:23.200305+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:24.200634+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:25.200867+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:26.201114+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:27.201340+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:28.201545+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:29.201828+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:30.202093+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:31.206707+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:32.209347+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:33.213391+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:34.216087+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:35.216565+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:36.217190+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:37.219873+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:38.222238+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:39.223883+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:40.224929+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:41.225528+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:42.225986+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:43.226350+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:44.226589+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:45.226872+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:46.227142+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:47.227297+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:48.227460+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:49.227713+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:50.227911+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:51.228067+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:52.228315+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:53.228533+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:54.228734+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:55.228916+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:56.229040+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:57.229308+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:58.229502+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:25:59.229719+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:00.270050+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:01.270178+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:02.270262+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:03.270406+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:04.270521+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:05.270635+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:06.270783+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:07.270911+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2a0d400 session 0x55b6f4fca960
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f459cc00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:08.271070+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:09.271200+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:10.271348+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:11.271502+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:12.271694+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:13.271869+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:14.271995+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:15.272148+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:16.272312+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:17.272467+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:18.272629+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f40bb800 session 0x55b6f4b1c780
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5c2c000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:19.272738+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:20.272883+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:21.273005+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:22.273157+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:23.273317+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:24.273444+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:25.273633+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:26.273807+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:27.274011+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:28.274150+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:29.274270+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 169.828842163s of 170.185073853s, submitted: 32
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 35717120 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:30.274422+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f207bc00 session 0x55b6f4ddb0e0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f4f82400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 35659776 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:31.274544+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:32.274695+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:33.274838+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:34.274996+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:35.275117+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:36.275283+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:37.275462+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:38.275625+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:39.275802+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:40.276098+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:41.276517+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:42.276642+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:43.276789+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:44.277011+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:45.277214+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 35463168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:46.277364+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 35463168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:47.277518+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 35463168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:48.277729+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 35463168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:49.277882+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 35454976 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:50.278084+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 35454976 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:51.278244+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:52.278433+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 35454976 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:53.278584+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 35454976 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:54.278756+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 35454976 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:55.278908+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 35454976 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:56.279072+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5157000 session 0x55b6f45f45a0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f2006000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:57.279210+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:58.279420+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:26:59.279609+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:00.279824+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:01.280017+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:02.280185+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:03.280319+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:04.280425+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:05.280546+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:06.280692+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:07.280847+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:08.281006+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:09.281193+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:10.281418+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:11.281534+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:12.281917+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:13.282225+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:14.282417+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:15.282541+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:16.282713+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:17.282859+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:18.282983+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:19.283122+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:20.283324+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:21.283532+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:22.283753+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:23.283865+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:24.284036+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:25.284177+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:26.284389+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:27.284513+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:28.284710+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:29.284878+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:30.285097+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:31.285237+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:32.285371+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:33.285529+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:34.285678+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:35.285830+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:36.285965+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:37.286103+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:38.286436+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:39.286678+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:40.286947+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:41.289168+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:42.290859+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:43.291383+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:44.292927+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:45.294219+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:46.294444+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:47.294711+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:48.295100+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:49.296710+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:50.296925+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:51.297544+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:52.297802+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:53.298077+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:54.298692+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:55.299173+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:56.299610+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:57.299870+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:58.300086+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:27:59.300786+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:00.301209+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:01.301614+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:02.301894+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:03.302125+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:04.302570+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:05.302942+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:06.303415+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:07.303702+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:08.303926+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:09.304119+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:10.304324+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:11.304481+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:12.304604+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:13.304787+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:14.304940+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:15.305099+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:16.305247+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:17.305365+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:18.305452+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:19.305563+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:20.305724+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:21.305868+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:22.306013+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:23.306157+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:24.306276+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:25.306410+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:26.306519+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:27.306627+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:28.306797+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:29.306955+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:30.307180+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:31.307315+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:32.307403+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:33.307544+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:34.307675+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:35.307775+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:36.307920+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:37.308033+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:38.308199+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:39.308341+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:40.308511+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:41.308631+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:42.308775+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:43.308911+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:44.309056+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:45.309195+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:46.309331+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:47.309491+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:48.309738+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:49.309904+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:50.310142+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:51.310307+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:52.310522+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:53.310699+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:54.310854+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:55.311021+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:56.311235+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:57.311418+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:58.311632+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:28:59.311856+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:00.312120+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:01.312250+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:02.312382+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:03.312571+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:04.312698+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:05.312846+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:06.313045+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:07.313216+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:08.313341+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:09.313551+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:10.313776+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:11.314002+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:12.314221+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:13.314452+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:14.314597+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:15.314781+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:16.314964+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:17.315095+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:18.315226+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:19.315367+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:20.315560+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:21.315707+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:22.315828+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:23.315947+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:24.316110+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:25.316227+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:26.316393+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:27.316523+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:28.316648+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:29.316813+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:30.316972+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:31.317087+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:32.317254+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:33.317429+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:34.317545+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:35.317759+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:36.317906+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:37.318042+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:38.318166+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:39.318313+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:40.318463+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:41.318693+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:42.318877+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f4680800 session 0x55b6f2aefc20
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5157000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:43.319007+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:44.319132+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:45.319284+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:46.319401+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:47.319548+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:48.319930+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:49.320263+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:50.320595+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:51.320884+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:52.321104+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:53.321236+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:54.321416+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:55.321592+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:56.321786+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:57.321959+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:58.322678+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:29:59.322868+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:00.323085+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:01.323296+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:02.323480+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:03.323729+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:04.323928+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:05.324182+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:06.324338+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:07.324475+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:08.324653+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:09.325049+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:10.325209+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:11.325422+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:12.325593+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:13.325786+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:14.326131+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:15.326293+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:16.326439+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:17.326646+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:18.326835+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:19.327018+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:20.327184+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:21.327296+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:22.327436+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:23.327615+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:24.327781+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:25.327916+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:26.328068+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:27.328220+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:28.328379+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:29.328510+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:30.328645+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f73da400 session 0x55b6f2a75680
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f4680800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:31.328855+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5b5bc00 session 0x55b6f4224b40
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f73da400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:32.329027+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:33.329218+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:34.329385+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:35.329527+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:36.329690+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:37.329869+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:38.330087+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:39.330300+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:40.330489+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:41.330612+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:42.330773+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:43.330854+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:44.330991+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:45.331136+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:46.331260+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:47.331401+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f4d62000 session 0x55b6f47f4f00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5087800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:48.331518+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:49.331648+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:50.331896+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:51.332050+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:52.332614+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:53.334283+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:54.334827+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:55.335039+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5b58c00 session 0x55b6f47f4000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f7444800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:56.335497+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:57.337632+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:58.338036+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:30:59.338910+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:00.339173+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:01.340272+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5156c00 session 0x55b6f1996000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5b58c00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:02.340728+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:03.341231+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:04.341564+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:05.342116+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:06.342336+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:07.343118+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:08.343300+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:09.344984+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:10.345197+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:11.346898+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:12.347107+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:13.347455+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:14.347715+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:15.348326+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:16.348598+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:17.348881+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:18.349128+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:19.350007+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:20.350298+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:21.350588+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:22.350795+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f459f800 session 0x55b6f2955860
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f3025000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:23.351073+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:24.351283+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:25.351436+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:26.351612+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:27.351944+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:28.352111+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:29.352506+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:30.352686+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:31.352930+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:32.353137+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:33.353295+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:34.353442+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:35.353787+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:36.353943+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:37.354164+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:38.354334+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:39.354638+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:40.354847+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:41.355207+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:42.355320+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:43.355475+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:44.355617+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:45.355799+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:46.356017+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:47.356222+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:48.356424+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:49.356808+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:50.357071+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:51.357397+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:52.357536+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:53.357851+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2026-01-22T15:31:54.358069+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _finish_auth 0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:54.359583+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:55.358359+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:56.358581+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:57.358794+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:58.358978+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:59.359132+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:00.359339+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:01.359480+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:02.359640+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:03.359819+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:04.359968+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:05.360161+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:06.360334+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:07.360446+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:08.360623+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:09.360774+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:10.360960+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:11.361126+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:12.361251+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:13.361415+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:14.361626+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:15.361757+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f459ec00 session 0x55b6f4b1c5a0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5b59c00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:16.361916+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:17.362048+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:18.362237+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:19.362432+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:20.362609+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:21.362774+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f7231400 session 0x55b6f45f54a0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f7443c00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f7231800 session 0x55b6f4db4d20
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f7231400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:22.362957+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:23.363089+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:24.363282+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:25.363508+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:26.363894+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:27.364063+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:28.364208+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:29.364383+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:30.364560+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:31.364720+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f514c400 session 0x55b6f3ef3e00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5087400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:32.364863+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:33.365021+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:34.365195+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:35.365365+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:36.365527+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:37.365794+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:38.365951+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:39.366118+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:40.366350+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:41.366498+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:42.366623+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:43.366793+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:44.366939+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:45.367050+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:46.367178+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:47.367353+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:48.367510+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:49.367765+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:50.367969+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:51.368159+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:52.368328+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:53.368442+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:54.368576+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:55.368725+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:56.368887+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:57.369116+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:58.369258+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:59.369448+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:00.369713+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:01.370863+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:02.371770+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:03.372449+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:04.372941+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:05.373078+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:06.373424+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:07.373716+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:08.373978+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:09.374132+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:10.374712+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:11.376167+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:12.376802+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:13.377011+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:14.377200+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:15.378171+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:16.378604+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:17.379460+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:18.379849+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:19.380459+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:20.380819+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:21.381387+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:22.381800+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:23.382316+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:24.382737+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:25.383318+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:26.383598+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:27.383822+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:28.383988+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:29.384203+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:30.384459+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:31.384643+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:32.384836+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:33.384976+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:34.385145+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:35.385324+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:36.385522+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:37.385756+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:38.385929+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:39.386090+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:40.386316+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:41.386461+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:42.386610+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:43.386704+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f2fb9c00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2fb9c00 session 0x55b6f4ddaf00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5400800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:44.386894+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:45.950046+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5400800 session 0x55b6f20dc5a0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5155400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 435.237915039s of 436.059173584s, submitted: 300
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5155400 session 0x55b6f45f43c0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:46.950220+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:47.950355+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2352686 data_alloc: 218103808 data_used: 18558976
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _renew_subs
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:48.950518+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5b58800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f5b58800 session 0x55b6f4dda780
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f2007000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:49.950640+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f2007000 session 0x55b6f42252c0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f2fb9c00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f2fb9c00 session 0x55b6f4ddad20
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:50.950833+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:51.950994+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3469000/0x0/0x1bfc00000, data 0x8c0a7dc/0x81e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:52.951140+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f3c65000 session 0x55b6f2bdba40
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5155400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3469000/0x0/0x1bfc00000, data 0x8c0a7dc/0x81e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2428763 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:53.951272+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3469000/0x0/0x1bfc00000, data 0x8c0a7dc/0x81e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:54.951559+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f7442c00 session 0x55b6f4de23c0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5400800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:55.951721+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:56.951882+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:57.952070+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2428763 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:58.952227+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5b58800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3469000/0x0/0x1bfc00000, data 0x8c0a7dc/0x81e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.197553635s of 13.351867676s, submitted: 36
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:59.952368+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:00.952531+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:01.952648+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:02.952785+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361857 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f5b58800 session 0x55b6f2eb92c0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:03.952896+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 35258368 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.5 total, 600.0 interval
                                           Cumulative writes: 16K writes, 50K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.00 MB/s
                                           Cumulative WAL: 16K writes, 6024 syncs, 2.82 writes per sync, written: 0.04 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 701 writes, 1265 keys, 701 commit groups, 1.0 writes per commit group, ingest: 0.53 MB, 0.00 MB/s
                                           Interval WAL: 701 writes, 347 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:04.953022+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 35258368 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:05.953252+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 35258368 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:06.953373+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:07.953522+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:08.953685+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:09.953807+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:10.953959+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:11.954194+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:12.955421+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f7443000 session 0x55b6f20ddc20
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f2027400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: mgrc ms_handle_reset ms_handle_reset con 0x55b6f5152000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1334415348
Jan 22 15:45:41 compute-1 ceph-osd[79044]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1334415348,v1:192.168.122.100:6801/1334415348]
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: get_auth_request con 0x55b6f2a0c400 auth_method 0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: mgrc handle_mgr_configure stats_period=5
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:13.955561+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f5084000 session 0x55b6f4b1cf00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f65e3000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f4ff0400 session 0x55b6f2b585a0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f7230000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:14.955875+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:15.956255+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:16.956437+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:17.956586+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:18.957534+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:19.957707+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:20.958244+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:21.958383+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:22.958607+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:23.958772+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:24.959082+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:25.959293+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f2f87400 session 0x55b6f227d4a0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5086400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:26.959466+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:27.959768+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:28.960011+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:29.960244+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:30.960495+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:31.960716+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:32.960977+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:33.961208+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:34.961422+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:35.961633+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:36.961770+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:37.961887+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:38.962041+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:39.962177+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:40.962353+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:41.962480+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:42.962607+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:43.962754+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:44.962918+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:45.963138+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:46.963346+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:47.963548+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:48.963772+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:49.963930+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:50.964324+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:51.964429+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:52.964560+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:53.964688+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:54.964892+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:55.965041+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:56.965285+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:57.965434+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:58.965643+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:59.965882+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:00.966143+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:01.966279+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:02.966436+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:03.966604+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:04.966761+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:05.966953+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:06.967106+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:07.967281+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:08.967390+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:09.967527+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:10.967713+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:11.967839+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:12.967979+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:13.968107+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:14.968284+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:15.968465+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f5089000 session 0x55b6f2f721e0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f2f87400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:16.968626+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:17.968884+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:18.969040+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:19.969217+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:20.969418+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:21.969558+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:22.969728+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:23.969911+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:24.970060+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:25.970257+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:26.970427+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f5eda000 session 0x55b6f4de2780
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5152400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:27.970786+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:28.971171+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:29.971373+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:30.971606+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:31.971781+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:32.971941+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:33.972107+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:34.972342+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:35.972552+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:36.972744+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:37.972913+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:38.973065+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:39.973239+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:40.973411+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:41.973588+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:42.973683+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:43.973890+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:44.974090+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:45.974268+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:46.974476+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:47.974704+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:48.974875+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:49.975060+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:50.975245+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:51.975435+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:52.975582+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:53.975680+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:54.975848+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:55.975971+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:56.976109+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:57.976222+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:58.976345+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:59.976459+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:00.976623+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:01.976768+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f45b1c00 session 0x55b6f4b0e960
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5eda000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:02.976901+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:03.977053+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:04.977237+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:05.977411+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:06.977567+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:07.977739+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f5b5ac00 session 0x55b6f2b59680
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f45b1c00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:08.977887+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:09.978079+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:10.978301+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:11.978487+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:12.978653+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:13.978855+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:14.979040+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:15.979274+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:16.979596+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:17.979903+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:18.980081+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:19.980437+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:20.980694+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f4d63c00 session 0x55b6f2f545a0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f7442400
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:21.981142+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:22.981306+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:23.981495+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:24.981713+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:25.981859+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 147.082885742s of 147.144195557s, submitted: 18
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:26.982008+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:27.982181+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:28.982367+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:29.982537+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 35774464 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:30.982778+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [0,0,0,0,1])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:31.982930+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f2f24800 session 0x55b6f3ed4960
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f4d60800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:32.983074+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 35676160 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:33.983194+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361137 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137551872 unmapped: 35627008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:34.983354+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:35.983538+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:36.983720+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:37.983864+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:38.983996+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:39.984471+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:40.984847+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:41.985032+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:42.985146+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:43.985458+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:44.985794+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:45.986052+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:46.986781+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:47.986913+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:48.987025+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:49.987211+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:50.987383+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:51.987534+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137625600 unmapped: 35553280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:52.987775+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137625600 unmapped: 35553280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:53.987925+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:54.988059+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:55.988223+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:56.988364+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:57.988543+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:58.988627+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:59.988775+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:00.989073+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:01.989185+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:02.989313+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:03.989444+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:04.989557+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:05.989721+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:06.989869+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:07.990028+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:08.990208+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:09.990330+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:10.990526+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:11.990655+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:12.990820+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:13.990953+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:14.991094+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:15.991226+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:16.991370+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:17.991566+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:18.993375+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:19.995130+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:20.995900+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:21.997341+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:22.998160+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:23.998434+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:24.999394+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:26.000019+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:27.000415+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:28.000574+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5b59800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.592838287s of 61.872817993s, submitted: 329
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:29.001172+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 35479552 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2417686 data_alloc: 218103808 data_used: 18567168
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:30.001720+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 35479552 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b35fc000/0x0/0x1bfc00000, data 0x8a7779d/0x8052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:31.001927+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _renew_subs
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 35463168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 181 ms_handle_reset con 0x55b6f5b59800 session 0x55b6f45f41e0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:32.002102+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 35463168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5088000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: get_auth_request con 0x55b6f15a7400 auth_method 0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:33.002268+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:34.002441+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 182 heartbeat osd_stat(store_statfs(0x1b35f6000/0x0/0x1bfc00000, data 0x8a7b0dd/0x8058000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2423818 data_alloc: 218103808 data_used: 18579456
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 182 ms_handle_reset con 0x55b6f5088000 session 0x55b6f227c3c0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:35.002704+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:36.002943+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:37.003149+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:38.003313+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:39.003478+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2369546 data_alloc: 218103808 data_used: 18575360
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 182 heartbeat osd_stat(store_statfs(0x1b3df6000/0x0/0x1bfc00000, data 0x827b0ba/0x7857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f65e2c00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 182 ms_handle_reset con 0x55b6f65e2c00 session 0x55b6f2f72780
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.586056709s of 10.986593246s, submitted: 44
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:40.003633+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:41.003861+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:42.004043+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:43.004215+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:44.004472+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:45.004707+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:46.005000+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:47.005219+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:48.005403+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:49.005550+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:50.005687+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:51.005864+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:52.005999+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:53.006094+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:54.006199+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:55.006483+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:56.006594+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:57.006713+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:58.006851+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:59.007003+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:00.007104+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:01.007302+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:02.007462+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:03.007621+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:04.007710+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:05.007873+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:06.008004+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:07.008170+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:08.008317+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:09.008508+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:10.008633+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:11.008821+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:12.008938+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:13.009108+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:14.009239+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:15.009413+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:16.009540+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:17.009646+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:18.009780+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:19.009957+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:20.010074+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:21.010243+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:22.010396+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:23.010921+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:24.012255+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:25.014084+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:26.014596+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:27.018544+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:28.019296+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:29.021525+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:30.022218+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:31.022901+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:32.023492+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:33.025612+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:34.026089+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:35.026235+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:36.026692+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:37.027095+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:38.027466+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:39.028330+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:40.028706+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:41.029187+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:42.029418+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:43.030278+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:44.030531+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:45.030981+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:46.031206+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:47.031564+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:48.031789+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:49.032163+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:50.032404+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:51.032625+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:52.032787+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:53.033060+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:54.033211+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f2f24400 session 0x55b6f47f5860
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f2fb9c00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:55.033493+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:56.033720+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:57.033858+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:58.034019+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:59.034283+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:00.034472+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:01.034714+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:02.034850+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:03.034990+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:04.035162+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:05.035297+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:06.035428+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [1])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:07.035597+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:08.035779+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:09.035971+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:10.036148+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f3d2d000 session 0x55b6f473cb40
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5088000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:11.036352+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:12.036484+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:13.036697+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:14.036858+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:15.037051+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:16.037243+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:17.037428+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:18.037499+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:19.037632+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:20.037785+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:21.037985+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:22.038150+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:23.038294+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:24.038391+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:25.038537+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:26.038734+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:27.038911+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:28.043071+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:29.045308+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:30.045705+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:31.047870+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:32.049916+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:33.052890+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:34.054434+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:35.055102+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:36.057316+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:37.057490+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:38.059094+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:39.059911+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:40.060481+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:41.060689+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:42.061216+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:43.061850+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:44.062597+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:45.063095+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:46.063304+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:47.063805+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:48.064085+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:49.064587+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:50.064966+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:51.065302+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:52.065645+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:53.065835+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:54.066210+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:55.066437+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:56.066641+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:57.066880+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:58.067181+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:59.067392+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:00.067599+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:01.067752+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:02.067883+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:03.068062+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:04.068191+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:05.068332+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:06.068529+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:07.068699+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:08.068821+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:09.068960+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:10.069080+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:11.069240+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:12.069392+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:13.069570+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:14.069781+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:15.069950+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:16.070109+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:17.070221+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:18.070438+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:19.070575+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:20.070728+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:21.070898+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:22.071003+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:23.071115+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:24.071250+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:25.071413+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:26.071582+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:27.071714+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:28.071868+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:29.071955+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:30.072072+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:31.072222+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:32.072940+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:33.073128+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:34.074267+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:35.077050+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:36.078630+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:37.079230+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:38.080012+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:39.080203+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:40.080469+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:41.081140+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:42.081705+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:43.082233+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:44.082491+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:45.082789+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:46.083238+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:47.083694+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:48.083840+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:49.084032+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:50.084217+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:51.084427+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:52.084652+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:53.084888+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:54.085041+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:55.085183+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:56.085330+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:57.085515+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:58.085710+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:59.085837+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:00.085979+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:01.086418+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:02.086628+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:03.086883+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:04.087141+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:05.087323+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:06.087497+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:07.087716+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f459cc00 session 0x55b6f4225860
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5b58800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:08.087911+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:09.088106+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:10.088290+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:11.088498+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:12.088746+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:13.088948+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:14.089086+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:15.089257+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:16.089424+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:17.089592+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:18.089774+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f5c2c000 session 0x55b6f4b0fc20
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5b59800
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:19.089992+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:20.090268+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:21.090432+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:22.090719+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:23.090868+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:24.091068+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:25.091238+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:26.091423+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:27.091582+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:28.091810+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:29.092036+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:30.092214+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:31.092382+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f4f82400 session 0x55b6f3ed4000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f722ec00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:32.092797+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:33.092935+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:34.093148+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:35.093327+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:36.093516+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:37.093649+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:38.093821+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:39.093930+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:40.094096+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:41.094289+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:42.094408+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:43.094576+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:44.094748+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:45.094931+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:46.095160+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:47.095315+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:48.095492+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:49.095691+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:50.095875+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:51.096109+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:52.096296+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:53.096496+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:54.096648+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:55.096846+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:56.097014+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f2006000 session 0x55b6f4481e00
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5086000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:57.097172+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:58.097414+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:59.097590+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:00.097787+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:01.097987+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:02.098116+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:03.098305+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:04.098481+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:05.098692+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:06.098895+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:07.099107+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:08.099281+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:09.099486+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:10.099724+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:11.099927+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:12.100051+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:13.100233+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:14.100409+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:15.100627+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:16.100800+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:17.101014+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:18.101211+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:19.101418+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:20.101641+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:21.101876+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:22.102016+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:23.102189+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:24.102394+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:25.102569+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:26.102730+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:27.102919+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:28.103044+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:29.103295+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:30.103452+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:31.103683+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:32.103965+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:33.104150+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:34.104368+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:35.104602+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:36.104791+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:37.105037+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:38.105213+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:39.105448+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:40.105707+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:41.105897+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:42.106701+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:43.106877+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:44.107228+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:45.107855+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:46.108038+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:47.109636+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:48.110536+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:49.110983+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:50.111397+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:51.112962+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:52.113727+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:53.114334+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:54.114693+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:55.115511+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:56.115964+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:57.116427+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:58.116888+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:59.117088+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:00.117259+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:01.117889+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:02.118104+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:03.118480+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:04.118713+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:05.118919+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:06.119217+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:07.119627+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:08.120014+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:09.120268+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:10.120469+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:11.120802+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:12.121123+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:13.121461+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:14.121629+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:15.121819+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:16.121958+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:17.122104+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:18.122251+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:19.122426+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:20.122607+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:21.122859+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:22.123042+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:23.123202+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:24.123353+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:25.123524+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:26.123605+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:27.123843+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:28.123986+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:29.124167+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:30.124371+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:31.124572+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:32.124713+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:33.124879+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:34.125048+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:35.125167+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:36.125355+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:37.125507+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:38.125719+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:39.125846+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:40.125995+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:41.126228+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:42.126416+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:43.126615+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:44.126764+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f5156000 session 0x55b6f2f73860
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f2006000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:45.126918+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:46.128233+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:47.131712+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:48.133042+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:49.135849+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:50.136190+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:51.136579+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:52.136831+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:53.137324+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:54.138319+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:55.138995+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:56.139297+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:57.139433+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:58.139604+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:59.139774+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:00.140200+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:01.140367+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:02.140618+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:03.140972+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:04.141192+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.5 total, 600.0 interval
                                           Cumulative writes: 17K writes, 51K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.00 MB/s
                                           Cumulative WAL: 17K writes, 6346 syncs, 2.79 writes per sync, written: 0.04 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 709 writes, 1322 keys, 709 commit groups, 1.0 writes per commit group, ingest: 0.52 MB, 0.00 MB/s
                                           Interval WAL: 709 writes, 322 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:05.141345+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:06.141735+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:07.141985+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:08.142394+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:09.142528+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:10.142696+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:11.142883+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:12.143136+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:13.143298+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:14.143433+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:15.143558+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:16.143769+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:17.144007+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:18.144181+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:19.144339+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:20.144444+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:21.144652+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:22.144838+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:23.145013+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:24.145189+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:25.145298+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:26.145471+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:27.145608+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:28.145758+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:29.146008+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:30.146153+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:31.146309+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:32.146440+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:33.146599+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:34.146756+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:35.146908+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:36.147043+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:37.147191+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:38.147356+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:39.147523+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:40.147712+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:41.147833+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:42.148040+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f5157000 session 0x55b6f41ce3c0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: handle_auth_request added challenge on 0x55b6f5156000
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:43.148197+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:44.148400+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:45.148553+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:46.148761+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:47.148932+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:48.149176+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:49.149316+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:50.149483+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:51.155328+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:52.155627+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:53.155865+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:54.156406+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:55.156761+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:56.156955+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:57.158355+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:58.158990+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:59.159733+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:00.159970+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:01.160252+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:02.160462+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:03.160588+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:04.160755+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:05.160875+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:06.161006+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:07.161161+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:08.161269+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 35266560 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: do_command 'config diff' '{prefix=config diff}'
Jan 22 15:45:41 compute-1 ceph-osd[79044]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:09.161393+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: do_command 'config show' '{prefix=config show}'
Jan 22 15:45:41 compute-1 ceph-osd[79044]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 22 15:45:41 compute-1 ceph-osd[79044]: do_command 'counter dump' '{prefix=counter dump}'
Jan 22 15:45:41 compute-1 ceph-osd[79044]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 22 15:45:41 compute-1 ceph-osd[79044]: do_command 'counter schema' '{prefix=counter schema}'
Jan 22 15:45:41 compute-1 ceph-osd[79044]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 138215424 unmapped: 34963456 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:10.161563+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 138067968 unmapped: 35110912 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:41 compute-1 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:41 compute-1 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: tick
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_tickets
Jan 22 15:45:41 compute-1 ceph-osd[79044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:11.161722+0000)
Jan 22 15:45:41 compute-1 ceph-osd[79044]: do_command 'log dump' '{prefix=log dump}'
Jan 22 15:45:41 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:41 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:41 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:41.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 22 15:45:42 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3457684757' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:42 compute-1 ceph-mon[81715]: from='client.28705 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:42 compute-1 ceph-mon[81715]: from='client.27611 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:42 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1561478504' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 15:45:42 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/261441280' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 15:45:42 compute-1 ceph-mon[81715]: from='client.27626 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:42 compute-1 ceph-mon[81715]: from='client.18693 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:42 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/195662771' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:42 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/2764929686' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:42 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:42 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2047798707' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:42 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3523237783' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:42 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3457684757' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 15:45:42 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/150362007' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:42 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 22 15:45:42 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1894221034' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:43 compute-1 crontab[261367]: (root) LIST (root)
Jan 22 15:45:43 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 22 15:45:43 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1042555888' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 15:45:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:43.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.28747 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.18708 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.27647 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.28771 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.18723 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.27665 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2917217955' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: pgmap v4161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/157024880' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/150362007' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/4174554485' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2148250249' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/928646561' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1894221034' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/4016002785' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1868857439' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3731501614' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:43 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1042555888' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 15:45:43 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:43 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:43 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:43.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 22 15:45:44 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2640204771' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 15:45:44 compute-1 ceph-mon[81715]: from='client.27677 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:44 compute-1 ceph-mon[81715]: from='client.27683 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:44 compute-1 ceph-mon[81715]: from='client.18759 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:44 compute-1 ceph-mon[81715]: from='client.27695 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:44 compute-1 ceph-mon[81715]: from='client.18774 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:44 compute-1 ceph-mon[81715]: from='client.28819 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:44 compute-1 ceph-mon[81715]: from='client.27707 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:44 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/4271205578' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:44 compute-1 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:44 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/4176284565' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:44 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/52836653' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 15:45:44 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2181286671' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 15:45:44 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/151730726' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 15:45:44 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/2640204771' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 15:45:44 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 22 15:45:44 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/527252969' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 22 15:45:45 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1922483207' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 22 15:45:45 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/192074747' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 15:45:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:45.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 22 15:45:45 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2776501217' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 22 15:45:45 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1535643559' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.18789 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.27731 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.28843 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.18795 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.27743 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.28858 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: pgmap v4162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.18813 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/527252969' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3267566899' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1922483207' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/441006344' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: Health check update: 177 slow ops, oldest one blocked for 7733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2074599857' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/192074747' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/2776501217' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2598294737' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 15:45:45 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/274536513' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:45 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:45 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:45 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:45.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 22 15:45:46 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/21857564' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 15:45:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 22 15:45:46 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2270236522' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 15:45:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 22 15:45:46 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4187776781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 15:45:46 compute-1 systemd[1]: Starting Hostname Service...
Jan 22 15:45:46 compute-1 systemd[1]: Started Hostname Service.
Jan 22 15:45:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 22 15:45:46 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3353346489' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 15:45:46 compute-1 podman[261836]: 2026-01-22 15:45:46.894983694 +0000 UTC m=+0.154553681 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:45:46 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 22 15:45:46 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/327179314' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.28879 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.27764 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.28897 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.18837 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.28912 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3308796185' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1535643559' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2321101815' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.28924 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/786694620' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/21857564' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/449513707' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/2270236522' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.28936 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2361371598' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1364877196' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: pgmap v4163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1078206682' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/4187776781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1837579571' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/892894112' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1216321723' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 15:45:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:47.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 22 15:45:47 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3986922559' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 15:45:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 22 15:45:47 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/715362271' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:45:47.546 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:45:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:45:47.547 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:45:47 compute-1 ovn_metadata_agent[139710]: 2026-01-22 15:45:47.547 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:45:47 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 22 15:45:47 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4268155440' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:47 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:47 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:47.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:48 compute-1 ceph-mon[81715]: from='client.28951 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-1 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3353346489' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 15:45:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/327179314' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2477755921' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/700747628' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 15:45:48 compute-1 ceph-mon[81715]: from='client.28969 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/4260425190' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3986922559' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 15:45:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/715362271' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/536748687' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 15:45:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/4268155440' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1073887977' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 15:45:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1452673441' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 15:45:48 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1543650651' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 22 15:45:48 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1842434873' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 15:45:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 22 15:45:49 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3352944826' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 15:45:49 compute-1 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:49 compute-1 ceph-mon[81715]: from='client.27863 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:49 compute-1 ceph-mon[81715]: from='client.28999 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:49 compute-1 ceph-mon[81715]: from='client.27890 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3410981744' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 15:45:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1842434873' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 15:45:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2772120358' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 15:45:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1048087941' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 15:45:49 compute-1 ceph-mon[81715]: pgmap v4164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/4074908786' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 15:45:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/4169395491' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 15:45:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2500690000' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 15:45:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3352944826' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 15:45:49 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3388168716' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 15:45:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:49.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:49 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 22 15:45:49 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3017593338' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 15:45:49 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:49 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:49 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:49.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:50 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 22 15:45:50 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2359320859' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.18954 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.27920 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.18948 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.18969 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.27935 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.18975 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1693688001' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.18984 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1153195434' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.27950 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3017593338' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2448017329' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2247528220' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/2359320859' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1925810316' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3813666031' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 22 15:45:50 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/473160357' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='client.19002 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='client.27962 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: Health check update: 212 slow ops, oldest one blocked for 7738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1703807545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='client.19026 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='client.27989 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: pgmap v4165: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/473160357' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2909934921' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/1371623889' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/317296694' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/917443970' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/4082909003' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1058460210' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:51 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:51 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:51.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:51 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 22 15:45:51 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/47089867' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 15:45:51 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:51 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:45:51 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:51.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:45:52 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='client.19044 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='client.28001 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='client.19071 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='client.28025 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='client.29134 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='client.29140 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/2695759947' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/47089867' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:52 compute-1 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:52 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 22 15:45:52 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3034816724' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 22 15:45:52 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Jan 22 15:45:52 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3037175596' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 22 15:45:53 compute-1 ceph-mon[81715]: from='client.29146 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:53 compute-1 ceph-mon[81715]: from='client.29152 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:53 compute-1 ceph-mon[81715]: from='client.29161 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:53 compute-1 ceph-mon[81715]: from='client.28082 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:53 compute-1 ceph-mon[81715]: from='client.29191 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:53 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:53 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:53 compute-1 ceph-mon[81715]: pgmap v4166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:53 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3034816724' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 22 15:45:53 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/4237834724' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 15:45:53 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3665761610' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 15:45:53 compute-1 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:53 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3037175596' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 22 15:45:53 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3956659008' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 15:45:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Jan 22 15:45:53 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/38607233' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 22 15:45:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:53.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:53 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Jan 22 15:45:53 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1398070699' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 22 15:45:53 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:53 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:53 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:53.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='client.29209 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='client.29224 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='client.19152 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/38607233' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='client.29239 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/2645925705' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/3424748525' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/1398070699' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/4060242858' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1133738816' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:54 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Jan 22 15:45:54 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3445601575' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:54 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:54 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:55 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Jan 22 15:45:55 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/357813982' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 22 15:45:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:55.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:55 compute-1 ceph-mon[81715]: from='client.28142 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:55 compute-1 ceph-mon[81715]: pgmap v4167: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:55 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/4020820940' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 22 15:45:55 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:55 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:55 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/3445601575' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 22 15:45:55 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:55 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:55 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:55 compute-1 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:55 compute-1 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:55 compute-1 ceph-mon[81715]: from='client.? 192.168.122.100:0/1643388399' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 22 15:45:55 compute-1 ceph-mon[81715]: from='client.? 192.168.122.101:0/357813982' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 22 15:45:55 compute-1 ceph-mon[81715]: from='client.? 192.168.122.102:0/3195370970' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 15:45:55 compute-1 ceph-mon[81715]: Health check update: 212 slow ops, oldest one blocked for 7743 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:55 compute-1 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Jan 22 15:45:55 compute-1 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/706488296' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 22 15:45:55 compute-1 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 15:45:55 compute-1 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:55 compute-1 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:55.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
